00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 4061
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3651
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.132 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.133 The recommended git tool is: git
00:00:00.133 using credential 00000000-0000-0000-0000-000000000002
00:00:00.135 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.203 Fetching changes from the remote Git repository
00:00:00.213 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.253 Using shallow fetch with depth 1
00:00:00.253 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.253 > git --version # timeout=10
00:00:00.285 > git --version # 'git version 2.39.2'
00:00:00.285 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.306 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.306 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.110 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.123 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.135 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.136 > git config core.sparsecheckout # timeout=10
00:00:08.149 > git read-tree -mu HEAD # timeout=10
00:00:08.165 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.186 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.186 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.314 [Pipeline] Start of Pipeline
00:00:08.330 [Pipeline] library
00:00:08.331 Loading library shm_lib@master
00:00:08.331 Library shm_lib@master is cached. Copying from home.
00:00:08.348 [Pipeline] node
00:00:08.362 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:08.364 [Pipeline] {
00:00:08.375 [Pipeline] catchError
00:00:08.377 [Pipeline] {
00:00:08.390 [Pipeline] wrap
00:00:08.397 [Pipeline] {
00:00:08.405 [Pipeline] stage
00:00:08.407 [Pipeline] { (Prologue)
00:00:08.422 [Pipeline] echo
00:00:08.423 Node: VM-host-SM38
00:00:08.427 [Pipeline] cleanWs
00:00:08.437 [WS-CLEANUP] Deleting project workspace...
00:00:08.437 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.444 [WS-CLEANUP] done
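The checkout above pins the build-pool scripts to a single commit. A minimal standalone equivalent of the traced git commands, assuming the same repository URL (Jenkins' credential helper, proxy setup and per-command timeouts are omitted):

    # Shallow-fetch the master branch of the build pool and pin to the fetched commit.
    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --progress --depth=1 -- "$(git config remote.origin.url)" refs/heads/master
    git checkout -f "$(git rev-parse 'FETCH_HEAD^{commit}')"   # db4637e8b94... in this run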
00:00:08.694 [Pipeline] setCustomBuildProperty
00:00:08.787 [Pipeline] httpRequest
00:00:09.567 [Pipeline] echo
00:00:09.569 Sorcerer 10.211.164.20 is alive
00:00:09.578 [Pipeline] retry
00:00:09.580 [Pipeline] {
00:00:09.593 [Pipeline] httpRequest
00:00:09.598 HttpMethod: GET
00:00:09.599 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.600 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.601 Response Code: HTTP/1.1 200 OK
00:00:09.601 Success: Status code 200 is in the accepted range: 200,404
00:00:09.602 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.691 [Pipeline] }
00:00:12.709 [Pipeline] // retry
00:00:12.716 [Pipeline] sh
00:00:13.003 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.022 [Pipeline] httpRequest
00:00:13.381 [Pipeline] echo
00:00:13.383 Sorcerer 10.211.164.20 is alive
00:00:13.392 [Pipeline] retry
00:00:13.394 [Pipeline] {
00:00:13.408 [Pipeline] httpRequest
00:00:13.413 HttpMethod: GET
00:00:13.414 URL: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:00:13.415 Sending request to url: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:00:13.434 Response Code: HTTP/1.1 200 OK
00:00:13.434 Success: Status code 200 is in the accepted range: 200,404
00:00:13.435 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:01:33.641 [Pipeline] }
00:01:33.661 [Pipeline] // retry
00:01:33.671 [Pipeline] sh
00:01:33.958 + tar --no-same-owner -xf spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:01:37.281 [Pipeline] sh
00:01:37.567 + git -C spdk log --oneline -n5
00:01:37.567 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:01:37.567 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit()
00:01:37.567 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size
00:01:37.567 79daf868a dif: Add SPDK_DIF_FLAGS_NVME_PRACT for dif_generate/verify_copy()
00:01:37.567 431baf1b5 dif: Insert abstraction into dif_generate/verify_copy() for NVMe PRACT
00:01:37.589 [Pipeline] withCredentials
00:01:37.602 > git --version # timeout=10
00:01:37.615 > git --version # 'git version 2.39.2'
00:01:37.635 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:37.638 [Pipeline] {
00:01:37.647 [Pipeline] retry
00:01:37.649 [Pipeline] {
00:01:37.664 [Pipeline] sh
00:01:37.949 + git ls-remote http://dpdk.org/git/dpdk main
00:01:37.963 [Pipeline] }
00:01:37.982 [Pipeline] // retry
00:01:37.989 [Pipeline] }
00:01:38.008 [Pipeline] // withCredentials
00:01:38.020 [Pipeline] httpRequest
00:01:38.483 [Pipeline] echo
00:01:38.485 Sorcerer 10.211.164.20 is alive
00:01:38.494 [Pipeline] retry
00:01:38.497 [Pipeline] {
00:01:38.511 [Pipeline] httpRequest
00:01:38.516 HttpMethod: GET
00:01:38.517 URL: http://10.211.164.20/packages/dpdk_f4ccce58c1a33cb41e1e820da504698437987efc.tar.gz
00:01:38.518 Sending request to url: http://10.211.164.20/packages/dpdk_f4ccce58c1a33cb41e1e820da504698437987efc.tar.gz
00:01:38.522 Response Code: HTTP/1.1 200 OK
00:01:38.523 Success: Status code 200 is in the accepted range: 200,404
00:01:38.523 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_f4ccce58c1a33cb41e1e820da504698437987efc.tar.gz
00:01:57.163 [Pipeline] }
00:01:57.180 [Pipeline] // retry
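Each package above follows the same fetch-and-unpack pattern against the internal mirror ("Sorcerer"). A hedged standalone equivalent of the httpRequest/tar pair, assuming plain curl is available on the node:

    # Download a pinned source tarball and unpack it, keeping local file ownership.
    pkg=spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
    curl -fSo "$pkg" "http://10.211.164.20/packages/$pkg"
    tar --no-same-owner -xf "$pkg"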
00:01:57.188 [Pipeline] sh
00:01:57.476 + tar --no-same-owner -xf dpdk_f4ccce58c1a33cb41e1e820da504698437987efc.tar.gz
00:01:58.880 [Pipeline] sh
00:01:59.167 + git -C dpdk log --oneline -n5
00:01:59.167 f4ccce58c1 doc: allow warnings in Sphinx for DTS
00:01:59.167 0c0cd5ffb0 version: 24.11-rc3
00:01:59.167 8c9a7471a0 dts: add checksum offload test suite
00:01:59.168 bee7cf823c dts: add checksum offload to testpmd shell
00:01:59.168 2eef9a80df dts: add dynamic queue test suite
00:01:59.188 [Pipeline] writeFile
00:01:59.205 [Pipeline] sh
00:01:59.495 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:59.509 [Pipeline] sh
00:01:59.794 + cat autorun-spdk.conf
00:01:59.795 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:59.795 SPDK_TEST_NVME=1
00:01:59.795 SPDK_TEST_FTL=1
00:01:59.795 SPDK_TEST_ISAL=1
00:01:59.795 SPDK_RUN_ASAN=1
00:01:59.795 SPDK_RUN_UBSAN=1
00:01:59.795 SPDK_TEST_XNVME=1
00:01:59.795 SPDK_TEST_NVME_FDP=1
00:01:59.795 SPDK_TEST_NATIVE_DPDK=main
00:01:59.795 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:59.795 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:59.803 RUN_NIGHTLY=1
00:01:59.804 [Pipeline] }
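These SPDK_TEST_*/SPDK_RUN_* flags drive the rest of the job: each stage sources this file and branches on the values, as the prepare_nvme.sh trace below shows for the optional FTL and FDP images. The gating pattern, as a minimal sketch:

    # Source the job configuration, then register optional images only for enabled tests.
    declare -A nvme_files
    source autorun-spdk.conf
    (( SPDK_TEST_FTL == 1 ))      && nvme_files["nvme-ftl.img"]=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files["nvme-fdp.img"]=1G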
00:01:59.819 [Pipeline] // stage
00:01:59.835 [Pipeline] stage
00:01:59.837 [Pipeline] { (Run VM)
00:01:59.849 [Pipeline] sh
00:02:00.133 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:00.133 + echo 'Start stage prepare_nvme.sh'
00:02:00.133 Start stage prepare_nvme.sh
00:02:00.133 + [[ -n 7 ]]
00:02:00.133 + disk_prefix=ex7
00:02:00.133 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:02:00.133 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:02:00.133 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:02:00.133 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:00.133 ++ SPDK_TEST_NVME=1
00:02:00.133 ++ SPDK_TEST_FTL=1
00:02:00.133 ++ SPDK_TEST_ISAL=1
00:02:00.133 ++ SPDK_RUN_ASAN=1
00:02:00.133 ++ SPDK_RUN_UBSAN=1
00:02:00.133 ++ SPDK_TEST_XNVME=1
00:02:00.133 ++ SPDK_TEST_NVME_FDP=1
00:02:00.133 ++ SPDK_TEST_NATIVE_DPDK=main
00:02:00.133 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:00.133 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:00.133 ++ RUN_NIGHTLY=1
00:02:00.133 + cd /var/jenkins/workspace/nvme-vg-autotest
00:02:00.133 + nvme_files=()
00:02:00.133 + declare -A nvme_files
00:02:00.133 + backend_dir=/var/lib/libvirt/images/backends
00:02:00.133 + nvme_files['nvme.img']=5G
00:02:00.133 + nvme_files['nvme-cmb.img']=5G
00:02:00.133 + nvme_files['nvme-multi0.img']=4G
00:02:00.133 + nvme_files['nvme-multi1.img']=4G
00:02:00.133 + nvme_files['nvme-multi2.img']=4G
00:02:00.133 + nvme_files['nvme-openstack.img']=8G
00:02:00.133 + nvme_files['nvme-zns.img']=5G
00:02:00.133 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:00.133 + (( SPDK_TEST_FTL == 1 ))
00:02:00.133 + nvme_files["nvme-ftl.img"]=6G
00:02:00.133 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:00.133 + nvme_files["nvme-fdp.img"]=1G
00:02:00.133 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:00.133 + for nvme in "${!nvme_files[@]}"
00:02:00.133 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:02:00.133 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:00.133 + for nvme in "${!nvme_files[@]}"
00:02:00.133 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G
00:02:00.394 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:02:00.394 + for nvme in "${!nvme_files[@]}"
00:02:00.394 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:02:00.394 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:00.394 + for nvme in "${!nvme_files[@]}"
00:02:00.394 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:02:00.394 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:00.394 + for nvme in "${!nvme_files[@]}"
00:02:00.394 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:02:00.966 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:00.966 + for nvme in "${!nvme_files[@]}"
00:02:00.966 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:02:00.966 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:00.966 + for nvme in "${!nvme_files[@]}"
00:02:00.966 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:02:00.966 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:00.966 + for nvme in "${!nvme_files[@]}"
00:02:00.966 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G
00:02:00.966 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:02:00.966 + for nvme in "${!nvme_files[@]}"
00:02:00.966 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:02:01.540 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:01.540 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:02:01.540 + echo 'End stage prepare_nvme.sh'
00:02:01.540 End stage prepare_nvme.sh
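Not part of the job, but a quick way to sanity-check the raw backing files created above (assumes qemu-img from the host's QEMU tooling is installed):

    # Print the name and virtual size reported for each freshly created backing file.
    for img in /var/lib/libvirt/images/backends/ex7-nvme*.img; do
        qemu-img info "$img" | grep -E '^(image|virtual size):'
    done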
00:02:01.555 [Pipeline] sh
00:02:01.908 + DISTRO=fedora39
00:02:01.909 + CPUS=10
00:02:01.909 + RAM=12288
00:02:01.909 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:01.909 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:02:01.909
00:02:01.909 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:02:01.909 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:02:01.909 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:02:01.909 HELP=0
00:02:01.909 DRY_RUN=0
00:02:01.909 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,
00:02:01.909 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:01.909 NVME_AUTO_CREATE=0
00:02:01.909 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,,
00:02:01.909 NVME_CMB=,,,,
00:02:01.909 NVME_PMR=,,,,
00:02:01.909 NVME_ZNS=,,,,
00:02:01.909 NVME_MS=true,,,,
00:02:01.909 NVME_FDP=,,,on,
00:02:01.909 SPDK_VAGRANT_DISTRO=fedora39
00:02:01.909 SPDK_VAGRANT_VMCPU=10
00:02:01.909 SPDK_VAGRANT_VMRAM=12288
00:02:01.909 SPDK_VAGRANT_PROVIDER=libvirt
00:02:01.909 SPDK_VAGRANT_HTTP_PROXY=
00:02:01.909 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:01.909 SPDK_OPENSTACK_NETWORK=0
00:02:01.909 VAGRANT_PACKAGE_BOX=0
00:02:01.909 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:01.909 FORCE_DISTRO=true
00:02:01.909 VAGRANT_BOX_VERSION=
00:02:01.909 EXTRA_VAGRANTFILES=
00:02:01.909 NIC_MODEL=e1000
00:02:01.909
00:02:01.909 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:02:01.909 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:02:04.454 Bringing machine 'default' up with 'libvirt' provider...
00:02:04.454 ==> default: Creating image (snapshot of base box volume).
00:02:04.715 ==> default: Creating domain with the following settings...
00:02:04.715 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732158651_b166e83a4655806d12b8
00:02:04.715 ==> default: -- Domain type: kvm
00:02:04.715 ==> default: -- Cpus: 10
00:02:04.715 ==> default: -- Feature: acpi
00:02:04.715 ==> default: -- Feature: apic
00:02:04.715 ==> default: -- Feature: pae
00:02:04.715 ==> default: -- Memory: 12288M
00:02:04.715 ==> default: -- Memory Backing: hugepages:
00:02:04.715 ==> default: -- Management MAC:
00:02:04.715 ==> default: -- Loader:
00:02:04.715 ==> default: -- Nvram:
00:02:04.715 ==> default: -- Base box: spdk/fedora39
00:02:04.715 ==> default: -- Storage pool: default
00:02:04.715 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732158651_b166e83a4655806d12b8.img (20G)
00:02:04.715 ==> default: -- Volume Cache: default
00:02:04.715 ==> default: -- Kernel:
00:02:04.715 ==> default: -- Initrd:
00:02:04.715 ==> default: -- Graphics Type: vnc
00:02:04.715 ==> default: -- Graphics Port: -1
00:02:04.715 ==> default: -- Graphics IP: 127.0.0.1
00:02:04.715 ==> default: -- Graphics Password: Not defined
00:02:04.715 ==> default: -- Video Type: cirrus
00:02:04.715 ==> default: -- Video VRAM: 9216
00:02:04.715 ==> default: -- Sound Type:
00:02:04.715 ==> default: -- Keymap: en-us
00:02:04.715 ==> default: -- TPM Path:
00:02:04.715 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:04.715 ==> default: -- Command line args:
00:02:04.715 ==> default: -> value=-device,
00:02:04.715 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:04.715 ==> default: -> value=-drive,
00:02:04.715 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:04.715 ==> default: -> value=-device,
00:02:04.715 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:04.715 ==> default: -> value=-device,
00:02:04.715 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:04.715 ==> default: -> value=-drive,
00:02:04.715 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0,
00:02:04.715 ==> default: -> value=-device,
00:02:04.715 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:04.715 ==> default: -> value=-device,
00:02:04.715 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:04.715 ==> default: -> value=-drive,
00:02:04.715 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:04.715 ==> default: -> value=-device,
00:02:04.715 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:04.715 ==> default: -> value=-drive,
00:02:04.715 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:04.715 ==> default: -> value=-device,
00:02:04.715 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:04.715 ==> default: -> value=-drive,
00:02:04.715 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:04.715 ==> default: -> value=-device,
00:02:04.715 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:04.715 ==> default: -> value=-device,
00:02:04.715 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:04.715 ==> default: -> value=-device,
00:02:04.715 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:04.716 ==> default: -> value=-drive,
00:02:04.716 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:04.716 ==> default: -> value=-device,
00:02:04.716 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
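The -device/-drive pairs above wire up four NVMe controllers: nvme-0 (serial 12340, one namespace with 64-byte metadata for the FTL tests), nvme-1 (12341, plain), nvme-2 (12342, three namespaces) and nvme-3 (12343, attached to an NVMe subsystem with FDP enabled). A condensed sketch of just the FDP controller as a hand-written invocation; not bootable on its own, since the machine, memory and boot options the wrapper adds are omitted:

    # FDP-capable controller: subsystem with 8 reclaim unit handles, one 4K-block namespace.
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096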
00:02:04.716 ==> default: Creating shared folders metadata...
00:02:04.976 ==> default: Starting domain.
00:02:06.891 ==> default: Waiting for domain to get an IP address...
00:02:28.865 ==> default: Waiting for SSH to become available...
00:02:28.865 ==> default: Configuring and enabling network interfaces...
00:02:31.397 default: SSH address: 192.168.121.68:22
00:02:31.397 default: SSH username: vagrant
00:02:31.397 default: SSH auth method: private key
00:02:33.296 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:39.943 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:44.125 ==> default: Mounting SSHFS shared folder...
00:02:46.026 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:46.026 ==> default: Checking Mount..
00:02:46.960 ==> default: Folder Successfully Mounted!
00:02:46.960
00:02:46.960 SUCCESS!
00:02:46.960
00:02:46.960 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:46.960 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:46.960 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:46.960
00:02:46.968 [Pipeline] }
00:02:46.983 [Pipeline] // stage
00:02:46.993 [Pipeline] dir
00:02:46.993 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:46.995 [Pipeline] {
00:02:47.008 [Pipeline] catchError
00:02:47.010 [Pipeline] {
00:02:47.022 [Pipeline] sh
00:02:47.344 + vagrant ssh-config --host vagrant
00:02:47.344 + sed -ne '/^Host/,$p'
00:02:47.344 + tee ssh_conf
00:02:49.898 Host vagrant
00:02:49.898 HostName 192.168.121.68
00:02:49.898 User vagrant
00:02:49.898 Port 22
00:02:49.898 UserKnownHostsFile /dev/null
00:02:49.898 StrictHostKeyChecking no
00:02:49.898 PasswordAuthentication no
00:02:49.898 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:49.898 IdentitiesOnly yes
00:02:49.898 LogLevel FATAL
00:02:49.898 ForwardAgent yes
00:02:49.898 ForwardX11 yes
00:02:49.898
00:02:49.912 [Pipeline] withEnv
00:02:49.915 [Pipeline] {
00:02:49.930 [Pipeline] sh
00:02:50.209 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:50.209 source /etc/os-release
00:02:50.209 [[ -e /image.version ]] && img=$(< /image.version)
00:02:50.209 # Minimal, systemd-like check.
00:02:50.209 if [[ -e /.dockerenv ]]; then
00:02:50.209 # Clear garbage from the node'\''s name:
00:02:50.209 # agt-er_autotest_547-896 -> autotest_547-896
00:02:50.209 # $HOSTNAME is the actual container id
00:02:50.209 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:50.209 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:50.209 # We can assume this is a mount from a host where container is running,
00:02:50.209 # so fetch its hostname to easily identify the target swarm worker.
00:02:50.209 container="$(< /etc/hostname) ($agent)"
00:02:50.209 else
00:02:50.209 # Fallback
00:02:50.209 container=$agent
00:02:50.209 fi
00:02:50.209 fi
00:02:50.209 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:50.209 '
00:02:50.219 [Pipeline] }
00:02:50.238 [Pipeline] // withEnv
00:02:50.248 [Pipeline] setCustomBuildProperty
00:02:50.266 [Pipeline] stage
00:02:50.269 [Pipeline] { (Tests)
00:02:50.287 [Pipeline] sh
00:02:50.565 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:50.837 [Pipeline] sh
00:02:51.115 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:51.131 [Pipeline] timeout
00:02:51.132 Timeout set to expire in 50 min
00:02:51.134 [Pipeline] {
00:02:51.149 [Pipeline] sh
00:02:51.429 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:51.995 HEAD is now at 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:02:52.007 [Pipeline] sh
00:02:52.283 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:52.555 [Pipeline] sh
00:02:52.832 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:52.847 [Pipeline] sh
00:02:53.125 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:53.126 ++ readlink -f spdk_repo
00:02:53.126 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:53.126 + [[ -n /home/vagrant/spdk_repo ]]
00:02:53.126 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:53.126 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:53.126 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:53.126 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:53.126 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:53.126 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:53.126 + cd /home/vagrant/spdk_repo
00:02:53.126 + source /etc/os-release
00:02:53.126 ++ NAME='Fedora Linux'
00:02:53.126 ++ VERSION='39 (Cloud Edition)'
00:02:53.126 ++ ID=fedora
00:02:53.126 ++ VERSION_ID=39
00:02:53.126 ++ VERSION_CODENAME=
00:02:53.126 ++ PLATFORM_ID=platform:f39
00:02:53.126 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:53.126 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:53.126 ++ LOGO=fedora-logo-icon
00:02:53.126 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:53.126 ++ HOME_URL=https://fedoraproject.org/
00:02:53.126 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:53.126 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:53.126 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:53.126 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:53.126 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:53.126 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:53.126 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:53.126 ++ SUPPORT_END=2024-11-12
00:02:53.126 ++ VARIANT='Cloud Edition'
00:02:53.126 ++ VARIANT_ID=cloud
00:02:53.126 + uname -a
00:02:53.126 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:53.126 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:53.693 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:53.693 Hugepages
00:02:53.693 node hugesize free / total
00:02:53.693 node0 1048576kB 0 / 0
00:02:53.693 node0 2048kB 0 / 0
00:02:53.693
00:02:53.693 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:53.693 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:53.952 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:53.952 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:53.952 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:53.952 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:53.952 + rm -f /tmp/spdk-ld-path
00:02:53.952 + source autorun-spdk.conf
00:02:53.952 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:53.952 ++ SPDK_TEST_NVME=1
00:02:53.952 ++ SPDK_TEST_FTL=1
00:02:53.952 ++ SPDK_TEST_ISAL=1
00:02:53.952 ++ SPDK_RUN_ASAN=1
00:02:53.952 ++ SPDK_RUN_UBSAN=1
00:02:53.952 ++ SPDK_TEST_XNVME=1
00:02:53.952 ++ SPDK_TEST_NVME_FDP=1
00:02:53.952 ++ SPDK_TEST_NATIVE_DPDK=main
00:02:53.952 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:53.952 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:53.952 ++ RUN_NIGHTLY=1
00:02:53.952 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:53.952 + [[ -n '' ]]
00:02:53.952 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:53.952 + for M in /var/spdk/build-*-manifest.txt
00:02:53.952 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:53.952 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:53.952 + for M in /var/spdk/build-*-manifest.txt
00:02:53.952 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:53.952 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:53.952 + for M in /var/spdk/build-*-manifest.txt
00:02:53.952 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:53.952 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
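The setup.sh status table above shows the four emulated controllers (QEMU addr 0x10-0x13) bound as nvme0-nvme3. Not something the job runs, but a hedged way to confirm that mapping from inside the guest via sysfs:

    # Print each controller's kernel name, PCI address and the serial set in the QEMU args.
    for c in /sys/class/nvme/nvme[0-9]*; do
        echo "$(basename "$c")  pci=$(basename "$(readlink -f "$c/device")")  serial=$(cat "$c/serial")"
    done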
00:02:53.952 ++ uname
00:02:53.952 + [[ Linux == \L\i\n\u\x ]]
00:02:53.952 + sudo dmesg -T
00:02:53.952 + sudo dmesg --clear
00:02:53.952 + dmesg_pid=5754
00:02:53.952 + [[ Fedora Linux == FreeBSD ]]
00:02:53.952 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:53.952 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:53.952 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:53.952 + [[ -x /usr/src/fio-static/fio ]]
00:02:53.952 + sudo dmesg -Tw
00:02:53.952 + export FIO_BIN=/usr/src/fio-static/fio
00:02:53.952 + FIO_BIN=/usr/src/fio-static/fio
00:02:53.952 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:53.952 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:53.952 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:53.952 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:53.952 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:53.952 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:53.952 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:53.952 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:53.952 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:53.952 03:11:41 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:53.952 03:11:41 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:53.952 03:11:41 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:53.952 03:11:41 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:53.952 03:11:41 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:53.952 03:11:41 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:53.952 03:11:41 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:53.952 03:11:41 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:53.952 03:11:41 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:53.952 03:11:41 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:53.952 03:11:41 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=main
00:02:53.952 03:11:41 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:53.952 03:11:41 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:53.952 03:11:41 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1
00:02:53.952 03:11:41 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:53.952 03:11:41 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:53.952 03:11:41 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:53.952 03:11:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:53.952 03:11:41 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:53.952 03:11:41 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:53.952 03:11:41 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:53.952 03:11:41 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:53.952 03:11:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:53.952 03:11:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:53.952 03:11:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:53.952 03:11:41 -- paths/export.sh@5 -- $ export PATH
00:02:53.952 03:11:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:53.952 03:11:41 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:53.952 03:11:41 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:53.952 03:11:41 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732158701.XXXXXX
00:02:53.952 03:11:41 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732158701.JZSg8H
00:02:53.952 03:11:41 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:53.952 03:11:41 -- common/autobuild_common.sh@499 -- $ '[' -n main ']'
00:02:53.953 03:11:41 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:54.212 03:11:41 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:54.212 03:11:41 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:54.212 03:11:41 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:54.212 03:11:41 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:54.212 03:11:41 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:54.212 03:11:41 -- common/autotest_common.sh@10 -- $ set +x
00:02:54.212 03:11:41 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:02:54.212 03:11:41 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:54.212 03:11:41 -- pm/common@17 -- $ local monitor
00:02:54.212 03:11:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:54.212 03:11:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:54.212 03:11:41 -- pm/common@25 -- $ sleep 1
00:02:54.212 03:11:41 -- pm/common@21 -- $ date +%s
00:02:54.212 03:11:41 -- pm/common@21 -- $ date +%s
00:02:54.212 03:11:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732158701
00:02:54.212 03:11:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732158701
00:02:54.212 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732158701_collect-cpu-load.pm.log
00:02:54.212 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732158701_collect-vmstat.pm.log
00:02:55.148 03:11:42 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:55.148 03:11:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:55.148 03:11:42 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:55.148 03:11:42 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:55.148 03:11:42 -- spdk/autobuild.sh@16 -- $ date -u
00:02:55.148 Thu Nov 21 03:11:42 AM UTC 2024
00:02:55.148 03:11:42 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:55.148 v25.01-pre-219-g557f022f6
00:02:55.148 03:11:42 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:55.148 03:11:42 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:55.148 03:11:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:55.148 03:11:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:55.148 03:11:42 -- common/autotest_common.sh@10 -- $ set +x
00:02:55.148 ************************************
00:02:55.148 START TEST asan
00:02:55.148 ************************************
00:02:55.148 using asan
00:02:55.148 03:11:42 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:55.148
00:02:55.148 real 0m0.000s
00:02:55.148 user 0m0.000s
00:02:55.148 sys 0m0.000s
00:02:55.148 ************************************
00:02:55.148 END TEST asan
00:02:55.148 03:11:42 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:55.148 03:11:42 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:55.148 ************************************
00:02:55.148 03:11:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:55.148 03:11:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:55.148 03:11:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:55.148 03:11:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:55.148 03:11:42 -- common/autotest_common.sh@10 -- $ set +x
00:02:55.148 ************************************
00:02:55.148 START TEST ubsan
00:02:55.148 ************************************
00:02:55.148 using ubsan
00:02:55.148 03:11:42 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:55.148
00:02:55.148 real 0m0.000s
00:02:55.148 user 0m0.000s
00:02:55.148 sys 0m0.000s
00:02:55.148 03:11:42 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:55.148 03:11:42 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:55.148 ************************************
00:02:55.148 END TEST ubsan
00:02:55.148 ************************************
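run_test wraps a command in START/END banners with a timing summary, which is what produces the blocks above. The real helper lives in autotest_common.sh; a minimal hedged sketch of the visible behavior:

    # Simplified stand-in for SPDK's run_test: banner, timed command, banner.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test asan echo 'using asan'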
00:02:55.148 03:11:42 -- spdk/autobuild.sh@27 -- $ '[' -n main ']'
00:02:55.148 03:11:42 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:55.148 03:11:42 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:55.148 03:11:42 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:02:55.148 03:11:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:55.148 03:11:42 -- common/autotest_common.sh@10 -- $ set +x
00:02:55.148 ************************************
00:02:55.148 START TEST build_native_dpdk
00:02:55.148 ************************************
00:02:55.148 03:11:42 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:02:55.148 f4ccce58c1 doc: allow warnings in Sphinx for DTS
00:02:55.148 0c0cd5ffb0 version: 24.11-rc3
00:02:55.148 8c9a7471a0 dts: add checksum offload test suite
00:02:55.148 bee7cf823c dts: add checksum offload to testpmd shell
00:02:55.148 2eef9a80df dts: add dynamic queue test suite
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc3
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:02:55.148 03:11:42 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc3 21.11.0
00:02:55.148 03:11:42 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc3 '<' 21.11.0
00:02:55.148 03:11:42 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:55.148 03:11:42 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:55.148 03:11:42 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:55.148 03:11:42 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:55.148 03:11:42 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:55.148 03:11:42 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:55.148 03:11:42 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:55.148 03:11:42 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:02:55.148 03:11:42 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:55.148 03:11:42 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:55.148 03:11:42 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:55.149 03:11:42 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:02:55.149 patching file config/rte_config.h
00:02:55.149 Hunk #1 succeeded at 72 (offset 13 lines).
00:02:55.149 03:11:42 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 24.11.0-rc3 24.07.0
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc3 '<' 24.07.0
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ ))
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]]
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]]
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:55.149 03:11:42 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 24.11.0-rc3 24.07.0
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc3 '>=' 24.07.0
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:02:55.149 03:11:42 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ ))
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]]
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]]
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:55.407 03:11:42 build_native_dpdk -- scripts/common.sh@367 -- $ return 0
00:02:55.407 03:11:42 build_native_dpdk -- common/autobuild_common.sh@187 -- $ patch -p1
00:02:55.407 patching file drivers/bus/pci/linux/pci_uio.c
00:02:55.407 03:11:42 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:02:55.407 03:11:42 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:02:55.407 03:11:42 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
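The three cmp_versions traces above decide the patch set: 24.11.0-rc3 is not older than 21.11.0, not older than 24.07.0, and is >= 24.07.0, so the pci_uio.c patch path is taken. A condensed hedged sketch of that component-wise compare (not the actual scripts/common.sh code):

    # Split versions on '.', '-' and ':' and compare numeric fields left to right;
    # returns 0 when v1 >= v2, mirroring the 'ge 24.11.0-rc3 24.07.0' trace.
    version_ge() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < ${#ver1[@]} && v < ${#ver2[@]}; v++ )); do
            [[ ${ver1[v]} =~ ^[0-9]+$ && ${ver2[v]} =~ ^[0-9]+$ ]] || continue
            (( 10#${ver1[v]} > 10#${ver2[v]} )) && return 0
            (( 10#${ver1[v]} < 10#${ver2[v]} )) && return 1
        done
        return 0
    }
    version_ge 24.11.0-rc3 24.07.0 && echo ">= 24.07.0, apply pci_uio patch"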
00:02:55.407 03:11:42 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:02:55.408 03:11:42 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:59.596 The Meson build system
00:02:59.596 Version: 1.5.0
00:02:59.596 Source dir: /home/vagrant/spdk_repo/dpdk
00:02:59.596 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:02:59.596 Build type: native build
00:02:59.596 Project name: DPDK
00:02:59.596 Project version: 24.11.0-rc3
00:02:59.596 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:59.596 C linker for the host machine: gcc ld.bfd 2.40-14
00:02:59.596 Host machine cpu family: x86_64
00:02:59.596 Host machine cpu: x86_64
00:02:59.596 Message: ## Building in Developer Mode ##
00:02:59.596 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:59.596 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:02:59.596 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:02:59.596 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools
00:02:59.596 Program cat found: YES (/usr/bin/cat)
00:02:59.596 config/meson.build:122: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:02:59.596 Compiler for C supports arguments -march=native: YES
00:02:59.596 Checking for size of "void *" : 8
00:02:59.596 Checking for size of "void *" : 8 (cached)
00:02:59.596 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:59.596 Library m found: YES
00:02:59.596 Library numa found: YES
00:02:59.596 Has header "numaif.h" : YES
00:02:59.596 Library fdt found: NO
00:02:59.596 Library execinfo found: NO
00:02:59.597 Has header "execinfo.h" : YES
00:02:59.597 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:59.597 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:59.597 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:59.597 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:59.597 Run-time dependency openssl found: YES 3.1.1
00:02:59.597 Run-time dependency libpcap found: YES 1.10.4
00:02:59.597 Has header "pcap.h" with dependency libpcap: YES
00:02:59.597 Compiler for C supports arguments -Wcast-qual: YES
00:02:59.597 Compiler for C supports arguments -Wdeprecated: YES
00:02:59.597 Compiler for C supports arguments -Wformat: YES
00:02:59.597 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:59.597 Compiler for C supports arguments -Wformat-security: NO
00:02:59.597 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:59.597 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:59.597 Compiler for C supports arguments -Wnested-externs: YES
00:02:59.597 Compiler for C supports arguments -Wold-style-definition: YES
00:02:59.597 Compiler for C supports arguments -Wpointer-arith: YES
00:02:59.597 Compiler for C supports arguments -Wsign-compare: YES
00:02:59.597 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:59.597 Compiler for C supports arguments -Wundef: YES
00:02:59.597 Compiler for C supports arguments -Wwrite-strings: YES
00:02:59.597 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:59.597 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:59.597 Program objdump found: YES (/usr/bin/objdump)
00:02:59.597 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512dq -mavx512bw: YES
00:02:59.597 Checking if "AVX512 checking" compiles: YES
00:02:59.597 Fetching value of define "__AVX512F__" : 1
00:02:59.597 Fetching value of define "__AVX512BW__" : 1
00:02:59.597 Fetching value of define "__AVX512DQ__" : 1
00:02:59.597 Fetching value of define "__AVX512VL__" : 1
00:02:59.597 Fetching value of define "__SSE4_2__" : 1
00:02:59.597 Fetching value of define "__AES__" : 1
00:02:59.597 Fetching value of define "__AVX__" : 1
00:02:59.597 Fetching value of define "__AVX2__" : 1
00:02:59.597 Fetching value of define "__AVX512BW__" : 1
00:02:59.597 Fetching value of define "__AVX512CD__" : 1
00:02:59.597 Fetching value of define "__AVX512DQ__" : 1
00:02:59.597 Fetching value of define "__AVX512F__" : 1
00:02:59.597 Fetching value of define "__AVX512VL__" : 1
00:02:59.597 Fetching value of define "__PCLMUL__" : 1
00:02:59.597 Fetching value of define "__RDRND__" : 1
00:02:59.597 Fetching value of define "__RDSEED__" : 1
00:02:59.597 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:59.597 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:59.597 Message: lib/log: Defining dependency "log"
00:02:59.597 Message: lib/kvargs: Defining dependency "kvargs"
00:02:59.597 Message: lib/argparse: Defining dependency "argparse"
00:02:59.597 Message: lib/telemetry: Defining dependency "telemetry"
00:02:59.597 Checking for function "pthread_attr_setaffinity_np" : YES
00:02:59.597 Checking for function "getentropy" : NO
00:02:59.597 Message: lib/eal: Defining dependency "eal"
00:02:59.597 Message: lib/ptr_compress: Defining dependency "ptr_compress"
00:02:59.597 Message: lib/ring: Defining dependency "ring"
00:02:59.597 Message: lib/rcu: Defining dependency "rcu"
00:02:59.597 Message: lib/mempool: Defining dependency "mempool"
00:02:59.597 Message: lib/mbuf: Defining dependency "mbuf"
00:02:59.597 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:59.597 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:59.597 Compiler for C supports arguments -mpclmul: YES
00:02:59.597 Compiler for C supports arguments -maes: YES
00:02:59.597 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:59.597 Message: lib/net: Defining dependency "net"
00:02:59.597 Message: lib/meter: Defining dependency "meter"
00:02:59.597 Message: lib/ethdev: Defining dependency "ethdev"
00:02:59.597 Message: lib/pci: Defining dependency "pci"
00:02:59.597 Message: lib/cmdline: Defining dependency "cmdline"
00:02:59.597 Message: lib/metrics: Defining dependency "metrics"
00:02:59.597 Message: lib/hash: Defining dependency "hash"
00:02:59.597 Message: lib/timer: Defining dependency "timer"
00:02:59.597 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:59.597 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:59.597 Fetching value of define "__AVX512CD__" : 1 (cached)
00:02:59.597 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:59.597 Message: lib/acl: Defining dependency "acl"
00:02:59.597 Message: lib/bbdev: Defining dependency "bbdev"
00:02:59.597 Message: lib/bitratestats: Defining dependency "bitratestats"
00:02:59.597 Run-time dependency libelf found: YES 0.191
00:02:59.597 Message: lib/bpf: Defining dependency "bpf"
00:02:59.597 Message: lib/cfgfile: Defining dependency "cfgfile"
00:02:59.597 Message: lib/compressdev: Defining dependency "compressdev"
00:02:59.597 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:59.597 Message: lib/distributor: Defining dependency "distributor"
00:02:59.597 Message: lib/dmadev: Defining dependency "dmadev"
00:02:59.597 Message: lib/efd: Defining dependency "efd"
00:02:59.597 Message: lib/eventdev: Defining dependency "eventdev"
00:02:59.597 Message: lib/dispatcher: Defining dependency "dispatcher"
00:02:59.597 Message: lib/gpudev: Defining dependency "gpudev"
00:02:59.597 Message: lib/gro: Defining dependency "gro"
00:02:59.597 Message: lib/gso: Defining dependency "gso"
00:02:59.597 Message: lib/ip_frag: Defining dependency "ip_frag"
00:02:59.597 Message: lib/jobstats: Defining dependency "jobstats"
00:02:59.597 Message: lib/latencystats: Defining dependency "latencystats"
00:02:59.597 Message: lib/lpm: Defining dependency "lpm"
00:02:59.597 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:59.597 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:59.597 Fetching value of define "__AVX512IFMA__" : 1
00:02:59.597 Message: lib/member: Defining dependency "member"
00:02:59.597 Message: lib/pcapng: Defining dependency "pcapng"
00:02:59.597 Message: lib/power: Defining dependency "power"
00:02:59.597 Message: lib/rawdev: Defining dependency "rawdev"
00:02:59.597 Message: lib/regexdev: Defining dependency "regexdev"
00:02:59.597 Message: lib/mldev: Defining dependency "mldev"
00:02:59.597 Message: lib/rib: Defining dependency "rib"
00:02:59.597 Message: lib/reorder: Defining dependency "reorder"
00:02:59.597 Message: lib/sched: Defining dependency "sched"
00:02:59.597 Message: lib/security: Defining dependency "security"
00:02:59.597 Message: lib/stack: Defining dependency "stack"
00:02:59.597 Has header "linux/userfaultfd.h" : YES
00:02:59.597 Message: lib/vhost: Defining dependency "vhost"
00:02:59.597 Message: lib/ipsec: Defining dependency "ipsec"
00:02:59.597 Message: lib/pdcp: Defining dependency "pdcp"
00:02:59.597 Message: lib/fib: Defining dependency "fib"
00:02:59.597 Message: lib/port: Defining dependency "port"
00:02:59.597 Message: lib/pdump: Defining dependency "pdump"
00:02:59.597 Message: lib/table: Defining dependency "table"
00:02:59.597 Message: lib/pipeline: Defining dependency "pipeline"
00:02:59.597 Message: lib/graph: Defining dependency "graph"
00:02:59.597 Message: lib/node: Defining dependency "node"
00:02:59.597 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:59.597 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:59.597 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:59.597 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:59.597 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:59.597 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:59.597 Compiler for C supports arguments -Wno-unused-value: YES
00:02:59.597 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:59.597 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:59.597 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:59.597 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:59.597 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:59.597 Message: drivers/power/acpi: Defining dependency "power_acpi"
00:02:59.597 Message: drivers/power/amd_pstate: Defining dependency "power_amd_pstate"
00:02:59.597 Message: drivers/power/cppc: Defining dependency "power_cppc"
00:03:00.179 Message: drivers/power/intel_pstate: Defining dependency "power_intel_pstate"
00:03:00.179 Message: drivers/power/intel_uncore: Defining dependency "power_intel_uncore"
00:03:00.179 Message: drivers/power/kvm_vm: Defining dependency "power_kvm_vm"
00:03:00.179 Has header "sys/epoll.h" : YES
00:03:00.179 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:00.179 Configuring doxy-api-html.conf using configuration
00:03:00.179 Configuring doxy-api-man.conf using configuration
00:03:00.179 Program mandb found: YES (/usr/bin/mandb)
00:03:00.179 Program
sphinx-build found: NO 00:03:00.179 Program sphinx-build found: NO 00:03:00.179 Configuring rte_build_config.h using configuration 00:03:00.179 Message: 00:03:00.179 ================= 00:03:00.180 Applications Enabled 00:03:00.180 ================= 00:03:00.180 00:03:00.180 apps: 00:03:00.180 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:03:00.180 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:03:00.180 test-pmd, test-regex, test-sad, test-security-perf, 00:03:00.180 00:03:00.180 Message: 00:03:00.180 ================= 00:03:00.180 Libraries Enabled 00:03:00.180 ================= 00:03:00.180 00:03:00.180 libs: 00:03:00.180 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:03:00.180 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:03:00.180 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:03:00.180 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:03:00.180 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:03:00.180 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:03:00.180 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:03:00.180 graph, node, 00:03:00.180 00:03:00.180 Message: 00:03:00.180 =============== 00:03:00.180 Drivers Enabled 00:03:00.180 =============== 00:03:00.180 00:03:00.180 common: 00:03:00.180 00:03:00.180 bus: 00:03:00.180 pci, vdev, 00:03:00.180 mempool: 00:03:00.180 ring, 00:03:00.180 dma: 00:03:00.180 00:03:00.180 net: 00:03:00.180 i40e, 00:03:00.180 raw: 00:03:00.180 00:03:00.180 crypto: 00:03:00.180 00:03:00.180 compress: 00:03:00.180 00:03:00.180 regex: 00:03:00.180 00:03:00.180 ml: 00:03:00.180 00:03:00.180 vdpa: 00:03:00.180 00:03:00.180 event: 00:03:00.180 00:03:00.180 baseband: 00:03:00.180 00:03:00.180 gpu: 00:03:00.180 00:03:00.180 power: 00:03:00.180 acpi, amd_pstate, cppc, intel_pstate, intel_uncore, kvm_vm, 00:03:00.180 00:03:00.180 Message: 00:03:00.180 ================= 00:03:00.180 Content Skipped 00:03:00.180 ================= 00:03:00.180 00:03:00.180 apps: 00:03:00.180 00:03:00.180 libs: 00:03:00.180 00:03:00.180 drivers: 00:03:00.180 common/cpt: not in enabled drivers build config 00:03:00.180 common/dpaax: not in enabled drivers build config 00:03:00.180 common/iavf: not in enabled drivers build config 00:03:00.180 common/idpf: not in enabled drivers build config 00:03:00.180 common/ionic: not in enabled drivers build config 00:03:00.180 common/mvep: not in enabled drivers build config 00:03:00.180 common/octeontx: not in enabled drivers build config 00:03:00.180 bus/auxiliary: not in enabled drivers build config 00:03:00.180 bus/cdx: not in enabled drivers build config 00:03:00.180 bus/dpaa: not in enabled drivers build config 00:03:00.180 bus/fslmc: not in enabled drivers build config 00:03:00.180 bus/ifpga: not in enabled drivers build config 00:03:00.180 bus/platform: not in enabled drivers build config 00:03:00.180 bus/uacce: not in enabled drivers build config 00:03:00.180 bus/vmbus: not in enabled drivers build config 00:03:00.180 common/cnxk: not in enabled drivers build config 00:03:00.180 common/mlx5: not in enabled drivers build config 00:03:00.180 common/nfp: not in enabled drivers build config 00:03:00.180 common/nitrox: not in enabled drivers build config 00:03:00.180 common/qat: not in enabled drivers build config 00:03:00.180 common/sfc_efx: not in enabled drivers build config 00:03:00.180 
mempool/bucket: not in enabled drivers build config 00:03:00.180 mempool/cnxk: not in enabled drivers build config 00:03:00.180 mempool/dpaa: not in enabled drivers build config 00:03:00.180 mempool/dpaa2: not in enabled drivers build config 00:03:00.180 mempool/octeontx: not in enabled drivers build config 00:03:00.180 mempool/stack: not in enabled drivers build config 00:03:00.180 dma/cnxk: not in enabled drivers build config 00:03:00.180 dma/dpaa: not in enabled drivers build config 00:03:00.180 dma/dpaa2: not in enabled drivers build config 00:03:00.180 dma/hisilicon: not in enabled drivers build config 00:03:00.180 dma/idxd: not in enabled drivers build config 00:03:00.180 dma/ioat: not in enabled drivers build config 00:03:00.180 dma/odm: not in enabled drivers build config 00:03:00.180 dma/skeleton: not in enabled drivers build config 00:03:00.180 net/af_packet: not in enabled drivers build config 00:03:00.180 net/af_xdp: not in enabled drivers build config 00:03:00.180 net/ark: not in enabled drivers build config 00:03:00.180 net/atlantic: not in enabled drivers build config 00:03:00.180 net/avp: not in enabled drivers build config 00:03:00.180 net/axgbe: not in enabled drivers build config 00:03:00.180 net/bnx2x: not in enabled drivers build config 00:03:00.180 net/bnxt: not in enabled drivers build config 00:03:00.180 net/bonding: not in enabled drivers build config 00:03:00.180 net/cnxk: not in enabled drivers build config 00:03:00.180 net/cpfl: not in enabled drivers build config 00:03:00.180 net/cxgbe: not in enabled drivers build config 00:03:00.180 net/dpaa: not in enabled drivers build config 00:03:00.180 net/dpaa2: not in enabled drivers build config 00:03:00.180 net/e1000: not in enabled drivers build config 00:03:00.180 net/ena: not in enabled drivers build config 00:03:00.180 net/enetc: not in enabled drivers build config 00:03:00.180 net/enetfec: not in enabled drivers build config 00:03:00.180 net/enic: not in enabled drivers build config 00:03:00.180 net/failsafe: not in enabled drivers build config 00:03:00.180 net/fm10k: not in enabled drivers build config 00:03:00.180 net/gve: not in enabled drivers build config 00:03:00.180 net/hinic: not in enabled drivers build config 00:03:00.180 net/hns3: not in enabled drivers build config 00:03:00.180 net/iavf: not in enabled drivers build config 00:03:00.180 net/ice: not in enabled drivers build config 00:03:00.180 net/idpf: not in enabled drivers build config 00:03:00.180 net/igc: not in enabled drivers build config 00:03:00.180 net/ionic: not in enabled drivers build config 00:03:00.180 net/ipn3ke: not in enabled drivers build config 00:03:00.180 net/ixgbe: not in enabled drivers build config 00:03:00.180 net/mana: not in enabled drivers build config 00:03:00.180 net/memif: not in enabled drivers build config 00:03:00.180 net/mlx4: not in enabled drivers build config 00:03:00.181 net/mlx5: not in enabled drivers build config 00:03:00.181 net/mvneta: not in enabled drivers build config 00:03:00.181 net/mvpp2: not in enabled drivers build config 00:03:00.181 net/netvsc: not in enabled drivers build config 00:03:00.181 net/nfb: not in enabled drivers build config 00:03:00.181 net/nfp: not in enabled drivers build config 00:03:00.181 net/ngbe: not in enabled drivers build config 00:03:00.181 net/ntnic: not in enabled drivers build config 00:03:00.181 net/null: not in enabled drivers build config 00:03:00.181 net/octeontx: not in enabled drivers build config 00:03:00.181 net/octeon_ep: not in enabled drivers build config 
00:03:00.181 net/pcap: not in enabled drivers build config 00:03:00.181 net/pfe: not in enabled drivers build config 00:03:00.181 net/qede: not in enabled drivers build config 00:03:00.181 net/r8169: not in enabled drivers build config 00:03:00.181 net/ring: not in enabled drivers build config 00:03:00.181 net/sfc: not in enabled drivers build config 00:03:00.181 net/softnic: not in enabled drivers build config 00:03:00.181 net/tap: not in enabled drivers build config 00:03:00.181 net/thunderx: not in enabled drivers build config 00:03:00.181 net/txgbe: not in enabled drivers build config 00:03:00.181 net/vdev_netvsc: not in enabled drivers build config 00:03:00.181 net/vhost: not in enabled drivers build config 00:03:00.181 net/virtio: not in enabled drivers build config 00:03:00.181 net/vmxnet3: not in enabled drivers build config 00:03:00.181 net/zxdh: not in enabled drivers build config 00:03:00.181 raw/cnxk_bphy: not in enabled drivers build config 00:03:00.181 raw/cnxk_gpio: not in enabled drivers build config 00:03:00.181 raw/cnxk_rvu_lf: not in enabled drivers build config 00:03:00.181 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:00.181 raw/gdtc: not in enabled drivers build config 00:03:00.181 raw/ifpga: not in enabled drivers build config 00:03:00.181 raw/ntb: not in enabled drivers build config 00:03:00.181 raw/skeleton: not in enabled drivers build config 00:03:00.181 crypto/armv8: not in enabled drivers build config 00:03:00.181 crypto/bcmfs: not in enabled drivers build config 00:03:00.181 crypto/caam_jr: not in enabled drivers build config 00:03:00.181 crypto/ccp: not in enabled drivers build config 00:03:00.181 crypto/cnxk: not in enabled drivers build config 00:03:00.181 crypto/dpaa_sec: not in enabled drivers build config 00:03:00.181 crypto/dpaa2_sec: not in enabled drivers build config 00:03:00.181 crypto/ionic: not in enabled drivers build config 00:03:00.181 crypto/ipsec_mb: not in enabled drivers build config 00:03:00.181 crypto/mlx5: not in enabled drivers build config 00:03:00.181 crypto/mvsam: not in enabled drivers build config 00:03:00.181 crypto/nitrox: not in enabled drivers build config 00:03:00.181 crypto/null: not in enabled drivers build config 00:03:00.181 crypto/octeontx: not in enabled drivers build config 00:03:00.181 crypto/openssl: not in enabled drivers build config 00:03:00.181 crypto/scheduler: not in enabled drivers build config 00:03:00.181 crypto/uadk: not in enabled drivers build config 00:03:00.181 crypto/virtio: not in enabled drivers build config 00:03:00.181 compress/isal: not in enabled drivers build config 00:03:00.181 compress/mlx5: not in enabled drivers build config 00:03:00.181 compress/nitrox: not in enabled drivers build config 00:03:00.181 compress/octeontx: not in enabled drivers build config 00:03:00.181 compress/uadk: not in enabled drivers build config 00:03:00.181 compress/zlib: not in enabled drivers build config 00:03:00.181 regex/mlx5: not in enabled drivers build config 00:03:00.181 regex/cn9k: not in enabled drivers build config 00:03:00.181 ml/cnxk: not in enabled drivers build config 00:03:00.181 vdpa/ifc: not in enabled drivers build config 00:03:00.181 vdpa/mlx5: not in enabled drivers build config 00:03:00.181 vdpa/nfp: not in enabled drivers build config 00:03:00.181 vdpa/sfc: not in enabled drivers build config 00:03:00.181 event/cnxk: not in enabled drivers build config 00:03:00.181 event/dlb2: not in enabled drivers build config 00:03:00.181 event/dpaa: not in enabled drivers build config 
00:03:00.181 event/dpaa2: not in enabled drivers build config 00:03:00.181 event/dsw: not in enabled drivers build config 00:03:00.181 event/opdl: not in enabled drivers build config 00:03:00.181 event/skeleton: not in enabled drivers build config 00:03:00.181 event/sw: not in enabled drivers build config 00:03:00.181 event/octeontx: not in enabled drivers build config 00:03:00.181 baseband/acc: not in enabled drivers build config 00:03:00.181 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:00.181 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:00.181 baseband/la12xx: not in enabled drivers build config 00:03:00.181 baseband/null: not in enabled drivers build config 00:03:00.181 baseband/turbo_sw: not in enabled drivers build config 00:03:00.181 gpu/cuda: not in enabled drivers build config 00:03:00.181 power/amd_uncore: not in enabled drivers build config 00:03:00.181 00:03:00.181 00:03:00.181 Message: DPDK build config complete: 00:03:00.181 source path = "/home/vagrant/spdk_repo/dpdk" 00:03:00.181 build path = "/home/vagrant/spdk_repo/dpdk/build-tmp" 00:03:00.181 Build targets in project: 244 00:03:00.181 00:03:00.181 DPDK 24.11.0-rc3 00:03:00.181 00:03:00.181 User defined options 00:03:00.181 libdir : lib 00:03:00.181 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:00.181 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:00.181 c_link_args : 00:03:00.181 enable_docs : false 00:03:00.181 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:03:00.181 enable_kmods : false 00:03:01.129 machine : native 00:03:01.129 tests : false 00:03:01.129 00:03:01.129 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:01.129 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
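The scripts/common.sh xtrace at the start of this stage (@353 through @368) is a per-component version comparison: each dotted field is normalized through the decimal helper (so "07" compares as 7), and the loop returns as soon as one side differs; here 24.11 clears the 24.07 threshold, the comparison returns 0, and the version-conditional patch to drivers/bus/pci/linux/pci_uio.c at autobuild_common.sh@187 is applied. Below is a minimal bash sketch of that loop, assuming the helper and array names shown in the trace; version_gt is an illustrative name, and the real scripts/common.sh may differ in detail:

decimal() {
    # Normalize one dotted component: reject non-numeric input,
    # force base 10 so a leading zero ("07") compares as 7.
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] || return 1
    echo $((10#$d))
}

version_gt() {
    # Returns 0 when $1 > $2, 1 otherwise (compared field by field).
    local IFS=. v
    local -a ver1=($1) ver2=($2)
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        ver1[v]=$(decimal "${ver1[v]:-0}") || return 1
        ver2[v]=$(decimal "${ver2[v]:-0}") || return 1
        (( ver1[v] > ver2[v] )) && return 0
        (( ver1[v] < ver2[v] )) && return 1
    done
    return 1   # all fields equal
}

version_gt 24.11 24.07 && echo "24.11 is newer"

As in the trace, the first fields tie (24 vs 24) and the second decides (11 vs 7), so the loop never reaches a third iteration.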
00:03:01.129 03:11:48 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:01.129 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:01.129 [1/764] Compiling C object lib/librte_log.a.p/log_log_syslog.c.o 00:03:01.387 [2/764] Compiling C object lib/librte_log.a.p/log_log_journal.c.o 00:03:01.387 [3/764] Compiling C object lib/librte_log.a.p/log_log_color.c.o 00:03:01.387 [4/764] Compiling C object lib/librte_log.a.p/log_log_timestamp.c.o 00:03:01.387 [5/764] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:01.387 [6/764] Linking static target lib/librte_kvargs.a 00:03:01.387 [7/764] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:01.387 [8/764] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:01.387 [9/764] Linking static target lib/librte_log.a 00:03:01.387 [10/764] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:03:01.387 [11/764] Linking static target lib/librte_argparse.a 00:03:01.646 [12/764] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.646 [13/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:01.646 [14/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:01.646 [15/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:01.646 [16/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:01.646 [17/764] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:01.646 [18/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:01.646 [19/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:01.646 [20/764] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.904 [21/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:01.904 [22/764] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.904 [23/764] Linking target lib/librte_log.so.25.0 00:03:01.904 [24/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:01.904 [25/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:01.904 [26/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore_var.c.o 00:03:01.904 [27/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:01.904 [28/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:02.163 [29/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:02.163 [30/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:02.163 [31/764] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:02.163 [32/764] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:03:02.163 [33/764] Linking static target lib/librte_telemetry.a 00:03:02.163 [34/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:02.163 [35/764] Linking target lib/librte_kvargs.so.25.0 00:03:02.163 [36/764] Linking target lib/librte_argparse.so.25.0 00:03:02.163 [37/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:02.163 [38/764] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:03:02.422 [39/764] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:02.422 [40/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:02.422 [41/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:02.422 [42/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:02.422 [43/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:02.422 [44/764] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.422 [45/764] Linking target lib/librte_telemetry.so.25.0 00:03:02.681 [46/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:02.681 [47/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:02.681 [48/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_bitset.c.o 00:03:02.681 [49/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:02.681 [50/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:02.681 [51/764] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:03:02.681 [52/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:02.681 [53/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:02.681 [54/764] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:02.681 [55/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:02.938 [56/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:02.938 [57/764] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:02.938 [58/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:02.939 [59/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:03.196 [60/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:03.196 [61/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:03.196 [62/764] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:03.196 [63/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:03.196 [64/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:03.196 [65/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:03.196 [66/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:03.196 [67/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:03.454 [68/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:03.454 [69/764] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:03.454 [70/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:03.454 [71/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:03.454 [72/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:03.454 [73/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:03.454 [74/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:03.454 [75/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:03.454 [76/764] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:03.711 [77/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:03.711 [78/764] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:03.711 [79/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:03.711 [80/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:03.711 [81/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:03.711 [82/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:03.969 [83/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:03.969 [84/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:03.969 [85/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:03.969 [86/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:03.969 [87/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:03.969 [88/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:03.969 [89/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:03.969 [90/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:03.969 [91/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:03:04.227 [92/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:04.227 [93/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:04.227 [94/764] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:04.227 [95/764] Linking static target lib/librte_ring.a 00:03:04.227 [96/764] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:04.485 [97/764] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.485 [98/764] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:04.485 [99/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:04.485 [100/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:04.485 [101/764] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:04.485 [102/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:04.485 [103/764] Linking static target lib/librte_eal.a 00:03:04.485 [104/764] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:04.485 [105/764] Linking static target lib/librte_mempool.a 00:03:04.743 [106/764] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:04.743 [107/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:04.743 [108/764] Linking static target lib/librte_rcu.a 00:03:04.743 [109/764] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:04.743 [110/764] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:04.743 [111/764] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:04.743 [112/764] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:05.001 [113/764] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:05.001 [114/764] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.001 [115/764] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:05.001 [116/764] Linking static target lib/librte_meter.a 00:03:05.001 [117/764] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.001 [118/764] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:05.001 [119/764] Linking static target lib/librte_net.a 00:03:05.001 [120/764] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:05.001 [121/764] Linking static target lib/librte_mbuf.a 00:03:05.258 [122/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:05.258 [123/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:05.258 [124/764] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.258 [125/764] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.258 [126/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:05.258 [127/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:05.515 [128/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:05.515 [129/764] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.773 [130/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:05.773 [131/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:06.030 [132/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:06.030 [133/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:06.030 [134/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:06.030 [135/764] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:06.030 [136/764] Linking static target lib/librte_pci.a 00:03:06.031 [137/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:06.288 [138/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:06.288 [139/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:06.288 [140/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:06.288 [141/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:06.288 [142/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:06.288 [143/764] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.288 [144/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:06.288 [145/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:06.288 [146/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:06.288 [147/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:06.288 [148/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:06.288 [149/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:06.288 [150/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:06.288 [151/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:06.545 [152/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:06.545 [153/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:06.545 [154/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:06.545 [155/764] Linking static target lib/librte_cmdline.a 00:03:06.545 [156/764] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:06.803 [157/764] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:06.803 [158/764] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:06.803 [159/764] Linking static target lib/librte_metrics.a 00:03:06.803 
[160/764] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:06.803 [161/764] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:07.061 [162/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:07.061 [163/764] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.061 [164/764] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:07.061 [165/764] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gf2_poly_math.c.o 00:03:07.318 [166/764] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.576 [167/764] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:07.576 [168/764] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:07.576 [169/764] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:07.576 [170/764] Linking static target lib/librte_timer.a 00:03:07.576 [171/764] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:07.576 [172/764] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:07.833 [173/764] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.144 [174/764] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:08.144 [175/764] Linking static target lib/librte_bitratestats.a 00:03:08.144 [176/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:08.144 [177/764] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.144 [178/764] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:08.144 [179/764] Linking static target lib/librte_hash.a 00:03:08.460 [180/764] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:08.460 [181/764] Linking static target lib/librte_bbdev.a 00:03:08.460 [182/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:08.460 [183/764] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:08.460 [184/764] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:08.460 [185/764] Linking static target lib/acl/libavx2_tmp.a 00:03:08.719 [186/764] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.719 [187/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:08.719 [188/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:08.719 [189/764] Linking static target lib/librte_ethdev.a 00:03:08.719 [190/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:08.719 [191/764] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:08.719 [192/764] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.719 [193/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:08.978 [194/764] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:08.978 [195/764] Linking static target lib/librte_cfgfile.a 00:03:09.236 [196/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:09.236 [197/764] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.237 [198/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:09.237 [199/764] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.237 [200/764] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:09.237 [201/764] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:09.237 [202/764] Linking target lib/librte_eal.so.25.0 00:03:09.237 [203/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:09.237 [204/764] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:03:09.237 [205/764] Linking target lib/librte_ring.so.25.0 00:03:09.494 [206/764] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:09.494 [207/764] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:09.494 [208/764] Linking target lib/librte_meter.so.25.0 00:03:09.494 [209/764] Linking target lib/librte_pci.so.25.0 00:03:09.494 [210/764] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:03:09.494 [211/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:09.494 [212/764] Linking target lib/librte_rcu.so.25.0 00:03:09.494 [213/764] Linking target lib/librte_mempool.so.25.0 00:03:09.494 [214/764] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:03:09.494 [215/764] Linking target lib/librte_timer.so.25.0 00:03:09.494 [216/764] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:03:09.494 [217/764] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:03:09.494 [218/764] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:03:09.494 [219/764] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:09.494 [220/764] Linking static target lib/librte_acl.a 00:03:09.494 [221/764] Linking static target lib/librte_bpf.a 00:03:09.494 [222/764] Linking target lib/librte_mbuf.so.25.0 00:03:09.494 [223/764] Linking target lib/librte_cfgfile.so.25.0 00:03:09.494 [224/764] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:03:09.753 [225/764] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:03:09.753 [226/764] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:09.753 [227/764] Linking target lib/librte_net.so.25.0 00:03:09.753 [228/764] Linking target lib/librte_bbdev.so.25.0 00:03:09.753 [229/764] Linking static target lib/librte_compressdev.a 00:03:09.753 [230/764] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:09.753 [231/764] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.753 [232/764] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:03:09.753 [233/764] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.753 [234/764] Linking target lib/librte_cmdline.so.25.0 00:03:09.753 [235/764] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:09.753 [236/764] Linking target lib/librte_acl.so.25.0 00:03:09.753 [237/764] Linking target lib/librte_hash.so.25.0 00:03:09.753 [238/764] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:10.011 [239/764] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:10.011 [240/764] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:03:10.011 [241/764] Linking static target lib/librte_distributor.a 00:03:10.011 [242/764] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:03:10.011 [243/764] Generating lib/compressdev.sym_chk with a custom command (wrapped by 
meson to capture output) 00:03:10.011 [244/764] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:10.270 [245/764] Linking target lib/librte_compressdev.so.25.0 00:03:10.270 [246/764] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:10.270 [247/764] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.270 [248/764] Linking target lib/librte_distributor.so.25.0 00:03:10.528 [249/764] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:10.528 [250/764] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:10.528 [251/764] Linking static target lib/librte_dmadev.a 00:03:10.528 [252/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:10.786 [253/764] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:10.786 [254/764] Linking static target lib/librte_efd.a 00:03:10.786 [255/764] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.786 [256/764] Linking target lib/librte_dmadev.so.25.0 00:03:10.786 [257/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:11.043 [258/764] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.044 [259/764] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:03:11.044 [260/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:11.044 [261/764] Linking target lib/librte_efd.so.25.0 00:03:11.044 [262/764] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:11.044 [263/764] Linking static target lib/librte_cryptodev.a 00:03:11.302 [264/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:11.302 [265/764] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:11.302 [266/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:11.302 [267/764] Linking static target lib/librte_dispatcher.a 00:03:11.302 [268/764] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:11.561 [269/764] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:11.561 [270/764] Linking static target lib/librte_gpudev.a 00:03:11.561 [271/764] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:11.561 [272/764] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.818 [273/764] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:11.818 [274/764] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:11.818 [275/764] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:11.818 [276/764] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:11.818 [277/764] Linking static target lib/librte_gro.a 00:03:12.076 [278/764] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.076 [279/764] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.076 [280/764] Linking target lib/librte_cryptodev.so.25.0 00:03:12.076 [281/764] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:12.076 [282/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:12.076 [283/764] Linking target lib/librte_gpudev.so.25.0 00:03:12.076 [284/764] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:12.076 [285/764] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:03:12.076 [286/764] Linking static target lib/librte_eventdev.a 00:03:12.076 [287/764] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:12.076 [288/764] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:12.076 [289/764] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.333 [290/764] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:12.333 [291/764] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:12.333 [292/764] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:12.333 [293/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:12.333 [294/764] Linking static target lib/librte_gso.a 00:03:12.333 [295/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:12.591 [296/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:12.591 [297/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:12.591 [298/764] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.591 [299/764] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.591 [300/764] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:12.591 [301/764] Linking static target lib/librte_jobstats.a 00:03:12.591 [302/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:12.591 [303/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:12.591 [304/764] Linking target lib/librte_ethdev.so.25.0 00:03:12.591 [305/764] Linking static target lib/librte_ip_frag.a 00:03:12.849 [306/764] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:12.849 [307/764] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:03:12.849 [308/764] Linking static target lib/librte_latencystats.a 00:03:12.849 [309/764] Linking target lib/librte_metrics.so.25.0 00:03:12.849 [310/764] Linking target lib/librte_bpf.so.25.0 00:03:12.849 [311/764] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.849 [312/764] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:12.849 [313/764] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:12.849 [314/764] Linking target lib/librte_gro.so.25.0 00:03:12.849 [315/764] Linking target lib/librte_jobstats.so.25.0 00:03:12.849 [316/764] Linking target lib/librte_gso.so.25.0 00:03:12.849 [317/764] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:03:12.849 [318/764] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:03:12.849 [319/764] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.849 [320/764] Linking target lib/librte_bitratestats.so.25.0 00:03:13.106 [321/764] Linking target lib/librte_ip_frag.so.25.0 00:03:13.106 [322/764] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:13.106 [323/764] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 00:03:13.106 [324/764] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.106 
[325/764] Linking target lib/librte_latencystats.so.25.0 00:03:13.106 [326/764] Compiling C object lib/librte_power.a.p/power_rte_power_qos.c.o 00:03:13.106 [327/764] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:13.106 [328/764] Linking static target lib/librte_lpm.a 00:03:13.364 [329/764] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:13.364 [330/764] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:13.364 [331/764] Compiling C object lib/librte_power.a.p/power_rte_power_cpufreq.c.o 00:03:13.364 [332/764] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:13.364 [333/764] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:13.364 [334/764] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.364 [335/764] Linking static target lib/librte_pcapng.a 00:03:13.364 [336/764] Linking target lib/librte_lpm.so.25.0 00:03:13.623 [337/764] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:03:13.623 [338/764] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:13.623 [339/764] Linking static target lib/librte_power.a 00:03:13.623 [340/764] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.623 [341/764] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:13.623 [342/764] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.623 [343/764] Linking target lib/librte_pcapng.so.25.0 00:03:13.623 [344/764] Linking target lib/librte_eventdev.so.25.0 00:03:13.623 [345/764] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:13.623 [346/764] Linking static target lib/librte_rawdev.a 00:03:13.881 [347/764] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:03:13.881 [348/764] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:03:13.881 [349/764] Linking target lib/librte_dispatcher.so.25.0 00:03:13.881 [350/764] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:13.881 [351/764] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:13.881 [352/764] Linking static target lib/librte_regexdev.a 00:03:13.881 [353/764] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:13.881 [354/764] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:14.141 [355/764] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:14.141 [356/764] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.141 [357/764] Linking target lib/librte_rawdev.so.25.0 00:03:14.141 [358/764] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:14.141 [359/764] Linking static target lib/librte_mldev.a 00:03:14.141 [360/764] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.141 [361/764] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:14.141 [362/764] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:14.141 [363/764] Linking static target lib/librte_member.a 00:03:14.141 [364/764] Linking target lib/librte_power.so.25.0 00:03:14.141 [365/764] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:14.436 [366/764] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:14.436 [367/764] 
Generating symbol file lib/librte_power.so.25.0.p/librte_power.so.25.0.symbols 00:03:14.436 [368/764] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:14.436 [369/764] Linking static target lib/librte_rib.a 00:03:14.436 [370/764] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.436 [371/764] Linking target lib/librte_regexdev.so.25.0 00:03:14.436 [372/764] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.436 [373/764] Linking target lib/librte_member.so.25.0 00:03:14.436 [374/764] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:14.436 [375/764] Linking static target lib/librte_reorder.a 00:03:14.436 [376/764] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:14.715 [377/764] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:14.715 [378/764] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:14.715 [379/764] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:14.715 [380/764] Linking static target lib/librte_stack.a 00:03:14.715 [381/764] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.715 [382/764] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.715 [383/764] Linking target lib/librte_rib.so.25.0 00:03:14.715 [384/764] Linking target lib/librte_reorder.so.25.0 00:03:14.715 [385/764] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:14.715 [386/764] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:14.715 [387/764] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.715 [388/764] Linking static target lib/librte_security.a 00:03:14.975 [389/764] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:03:14.975 [390/764] Linking target lib/librte_stack.so.25.0 00:03:14.975 [391/764] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:03:14.975 [392/764] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:14.975 [393/764] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:14.975 [394/764] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:15.235 [395/764] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.235 [396/764] Linking target lib/librte_security.so.25.0 00:03:15.235 [397/764] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:15.235 [398/764] Linking static target lib/librte_sched.a 00:03:15.235 [399/764] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.235 [400/764] Linking target lib/librte_mldev.so.25.0 00:03:15.235 [401/764] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:03:15.495 [402/764] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:15.496 [403/764] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.496 [404/764] Linking target lib/librte_sched.so.25.0 00:03:15.496 [405/764] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:15.496 [406/764] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:03:15.754 [407/764] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:15.754 [408/764] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:15.754 
[409/764] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:16.014 [410/764] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:16.014 [411/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:16.014 [412/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:16.273 [413/764] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:16.273 [414/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:16.273 [415/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:16.273 [416/764] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:16.273 [417/764] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:16.533 [418/764] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:16.533 [419/764] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:16.533 [420/764] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:03:16.533 [421/764] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:16.533 [422/764] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:16.533 [423/764] Linking static target lib/librte_ipsec.a 00:03:16.793 [424/764] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:16.793 [425/764] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.793 [426/764] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:16.793 [427/764] Linking target lib/librte_ipsec.so.25.0 00:03:17.051 [428/764] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:17.051 [429/764] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:03:17.309 [430/764] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:17.309 [431/764] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:17.309 [432/764] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:17.309 [433/764] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:17.309 [434/764] Linking static target lib/librte_fib.a 00:03:17.309 [435/764] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:17.309 [436/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:17.309 [437/764] Linking static target lib/librte_pdcp.a 00:03:17.567 [438/764] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:17.567 [439/764] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.567 [440/764] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.567 [441/764] Linking target lib/librte_fib.so.25.0 00:03:17.567 [442/764] Linking target lib/librte_pdcp.so.25.0 00:03:17.826 [443/764] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:17.826 [444/764] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:17.826 [445/764] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:17.826 [446/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:17.826 [447/764] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:18.084 [448/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:18.084 [449/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:18.342 [450/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:18.342 [451/764] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:18.342 [452/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:18.342 [453/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:18.342 [454/764] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:18.342 [455/764] Linking static target lib/librte_port.a 00:03:18.342 [456/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:18.601 [457/764] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:18.601 [458/764] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:18.601 [459/764] Linking static target lib/librte_pdump.a 00:03:18.601 [460/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:18.601 [461/764] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:18.601 [462/764] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.859 [463/764] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.859 [464/764] Linking target lib/librte_pdump.so.25.0 00:03:18.859 [465/764] Linking target lib/librte_port.so.25.0 00:03:18.859 [466/764] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:03:18.859 [467/764] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:18.859 [468/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:18.859 [469/764] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:03:18.859 [470/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:19.117 [471/764] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:19.117 [472/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:19.117 [473/764] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:19.375 [474/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:19.375 [475/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:19.375 [476/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:19.375 [477/764] Linking static target lib/librte_table.a 00:03:19.375 [478/764] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:19.633 [479/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:19.634 [480/764] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:19.634 [481/764] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.892 [482/764] Linking target lib/librte_table.so.25.0 00:03:19.892 [483/764] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:19.892 [484/764] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:03:19.892 [485/764] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:19.892 [486/764] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:19.892 [487/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:20.149 [488/764] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:20.149 [489/764] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:20.149 [490/764] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:20.149 [491/764] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:20.407 [492/764] 
Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:20.407 [493/764] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:20.407 [494/764] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:20.407 [495/764] Linking static target lib/librte_graph.a 00:03:20.665 [496/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:20.665 [497/764] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:20.665 [498/764] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:20.665 [499/764] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:20.925 [500/764] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:20.925 [501/764] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.925 [502/764] Linking target lib/librte_graph.so.25.0 00:03:20.925 [503/764] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:03:21.185 [504/764] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:21.185 [505/764] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:21.185 [506/764] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:21.185 [507/764] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:21.185 [508/764] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:21.443 [509/764] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:21.443 [510/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:21.443 [511/764] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:21.443 [512/764] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:21.443 [513/764] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:21.443 [514/764] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:21.701 [515/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:21.701 [516/764] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:21.701 [517/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:21.701 [518/764] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:21.701 [519/764] Linking static target lib/librte_node.a 00:03:21.701 [520/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:21.959 [521/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:21.959 [522/764] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.959 [523/764] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:21.959 [524/764] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:21.959 [525/764] Linking target lib/librte_node.so.25.0 00:03:21.959 [526/764] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:22.218 [527/764] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:22.218 [528/764] Linking static target drivers/librte_bus_vdev.a 00:03:22.218 [529/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:22.218 [530/764] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:22.218 [531/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:22.218 [532/764] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 
00:03:22.218 [533/764] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.218 [534/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:22.218 [535/764] Linking target drivers/librte_bus_vdev.so.25.0 00:03:22.218 [536/764] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:22.218 [537/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:22.218 [538/764] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:22.218 [539/764] Linking static target drivers/librte_bus_pci.a 00:03:22.218 [540/764] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:22.476 [541/764] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:03:22.476 [542/764] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:22.476 [543/764] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:22.476 [544/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:22.476 [545/764] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:22.476 [546/764] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:22.805 [547/764] Linking static target drivers/librte_mempool_ring.a 00:03:22.805 [548/764] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:22.805 [549/764] Linking target drivers/librte_mempool_ring.so.25.0 00:03:22.805 [550/764] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.805 [551/764] Linking target drivers/librte_bus_pci.so.25.0 00:03:22.805 [552/764] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:03:22.805 [553/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:23.077 [554/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:23.077 [555/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:23.077 [556/764] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:23.644 [557/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:23.903 [558/764] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:23.903 [559/764] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:23.903 [560/764] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:23.903 [561/764] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:23.903 [562/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:24.161 [563/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:24.161 [564/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:24.161 [565/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:24.418 [566/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:24.418 [567/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:24.688 [568/764] Compiling C object drivers/libtmp_rte_power_acpi.a.p/power_acpi_acpi_cpufreq.c.o 00:03:24.688 [569/764] Linking static target 
drivers/libtmp_rte_power_acpi.a 00:03:24.688 [570/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:24.688 [571/764] Generating drivers/rte_power_acpi.pmd.c with a custom command 00:03:24.688 [572/764] Compiling C object drivers/librte_power_acpi.a.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:03:24.688 [573/764] Linking static target drivers/librte_power_acpi.a 00:03:24.688 [574/764] Compiling C object drivers/librte_power_acpi.so.25.0.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:03:24.688 [575/764] Compiling C object drivers/libtmp_rte_power_amd_pstate.a.p/power_amd_pstate_amd_pstate_cpufreq.c.o 00:03:24.688 [576/764] Linking static target drivers/libtmp_rte_power_amd_pstate.a 00:03:24.688 [577/764] Linking target drivers/librte_power_acpi.so.25.0 00:03:24.947 [578/764] Generating drivers/rte_power_amd_pstate.pmd.c with a custom command 00:03:24.947 [579/764] Compiling C object drivers/libtmp_rte_power_cppc.a.p/power_cppc_cppc_cpufreq.c.o 00:03:24.947 [580/764] Compiling C object drivers/librte_power_amd_pstate.a.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:03:24.947 [581/764] Linking static target drivers/librte_power_amd_pstate.a 00:03:24.947 [582/764] Linking static target drivers/libtmp_rte_power_cppc.a 00:03:24.947 [583/764] Compiling C object drivers/librte_power_amd_pstate.so.25.0.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:03:24.947 [584/764] Linking target drivers/librte_power_amd_pstate.so.25.0 00:03:24.947 [585/764] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_guest_channel.c.o 00:03:24.947 [586/764] Generating drivers/rte_power_cppc.pmd.c with a custom command 00:03:24.947 [587/764] Compiling C object drivers/librte_power_cppc.a.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:03:24.947 [588/764] Linking static target drivers/librte_power_cppc.a 00:03:24.947 [589/764] Compiling C object drivers/librte_power_cppc.so.25.0.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:03:25.206 [590/764] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_kvm_vm.c.o 00:03:25.206 [591/764] Linking static target drivers/libtmp_rte_power_kvm_vm.a 00:03:25.206 [592/764] Linking target drivers/librte_power_cppc.so.25.0 00:03:25.206 [593/764] Compiling C object drivers/libtmp_rte_power_intel_uncore.a.p/power_intel_uncore_intel_uncore.c.o 00:03:25.206 [594/764] Linking static target drivers/libtmp_rte_power_intel_uncore.a 00:03:25.206 [595/764] Generating drivers/rte_power_kvm_vm.pmd.c with a custom command 00:03:25.206 [596/764] Compiling C object drivers/librte_power_kvm_vm.a.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:03:25.206 [597/764] Linking static target drivers/librte_power_kvm_vm.a 00:03:25.206 [598/764] Compiling C object drivers/libtmp_rte_power_intel_pstate.a.p/power_intel_pstate_intel_pstate_cpufreq.c.o 00:03:25.206 [599/764] Linking static target drivers/libtmp_rte_power_intel_pstate.a 00:03:25.206 [600/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:25.206 [601/764] Generating drivers/rte_power_intel_uncore.pmd.c with a custom command 00:03:25.206 [602/764] Compiling C object drivers/librte_power_kvm_vm.so.25.0.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:03:25.206 [603/764] Compiling C object drivers/librte_power_intel_uncore.a.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:03:25.206 [604/764] Compiling C object drivers/librte_power_intel_uncore.so.25.0.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:03:25.206 [605/764] Linking static 
target drivers/librte_power_intel_uncore.a 00:03:25.206 [606/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:25.464 [607/764] Linking target drivers/librte_power_intel_uncore.so.25.0 00:03:25.464 [608/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:25.464 [609/764] Generating drivers/rte_power_intel_pstate.pmd.c with a custom command 00:03:25.464 [610/764] Compiling C object drivers/librte_power_intel_pstate.a.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:03:25.464 [611/764] Linking static target drivers/librte_power_intel_pstate.a 00:03:25.464 [612/764] Generating drivers/rte_power_kvm_vm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.464 [613/764] Compiling C object drivers/librte_power_intel_pstate.so.25.0.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:03:25.464 [614/764] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:03:25.464 [615/764] Linking target drivers/librte_power_kvm_vm.so.25.0 00:03:25.464 [616/764] Linking target drivers/librte_power_intel_pstate.so.25.0 00:03:25.722 [617/764] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:25.722 [618/764] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:25.722 [619/764] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:25.722 [620/764] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:25.980 [621/764] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:25.980 [622/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:25.980 [623/764] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:25.980 [624/764] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:25.980 [625/764] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:25.980 [626/764] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:25.980 [627/764] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:26.239 [628/764] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:26.239 [629/764] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:26.239 [630/764] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:03:26.239 [631/764] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:26.239 [632/764] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:26.239 [633/764] Linking static target drivers/librte_net_i40e.a 00:03:26.239 [634/764] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:26.498 [635/764] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:26.498 [636/764] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:26.498 [637/764] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:26.498 [638/764] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:26.498 [639/764] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:26.498 [640/764] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:26.498 [641/764] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:26.498 [642/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:26.757 [643/764] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.757 [644/764] Linking target drivers/librte_net_i40e.so.25.0 00:03:26.757 [645/764] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:27.016 [646/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:27.016 [647/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:27.275 [648/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:27.275 [649/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:27.275 [650/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:27.275 [651/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:27.275 [652/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:27.534 [653/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:27.534 [654/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:27.793 [655/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:27.793 [656/764] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:27.793 [657/764] Linking static target lib/librte_vhost.a 00:03:27.793 [658/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:27.793 [659/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:27.793 [660/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:27.793 [661/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:28.052 [662/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:28.052 [663/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:28.052 [664/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:28.310 [665/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:28.310 [666/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:28.310 [667/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:28.310 [668/764] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:28.310 [669/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:28.569 [670/764] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.569 [671/764] Linking target lib/librte_vhost.so.25.0 00:03:28.569 [672/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:28.569 [673/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:28.827 [674/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:28.827 [675/764] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:29.394 [676/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:29.394 [677/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:29.394 [678/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:29.394 [679/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:29.394 [680/764] 
Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:29.394 [681/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:29.394 [682/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:29.394 [683/764] Linking static target lib/librte_pipeline.a 00:03:29.394 [684/764] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:29.653 [685/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:29.653 [686/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:29.653 [687/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:29.653 [688/764] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:29.653 [689/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:29.653 [690/764] Linking target app/dpdk-dumpcap 00:03:29.953 [691/764] Linking target app/dpdk-graph 00:03:29.953 [692/764] Linking target app/dpdk-pdump 00:03:29.953 [693/764] Linking target app/dpdk-test-acl 00:03:29.953 [694/764] Linking target app/dpdk-proc-info 00:03:29.953 [695/764] Linking target app/dpdk-test-cmdline 00:03:29.953 [696/764] Linking target app/dpdk-test-compress-perf 00:03:30.212 [697/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:30.212 [698/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:30.212 [699/764] Linking target app/dpdk-test-dma-perf 00:03:30.212 [700/764] Linking target app/dpdk-test-crypto-perf 00:03:30.212 [701/764] Linking target app/dpdk-test-fib 00:03:30.212 [702/764] Linking target app/dpdk-test-gpudev 00:03:30.470 [703/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:30.470 [704/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:30.470 [705/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:30.470 [706/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:30.471 [707/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:30.471 [708/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:30.729 [709/764] Linking target app/dpdk-test-flow-perf 00:03:30.729 [710/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:30.729 [711/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:30.729 [712/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:30.729 [713/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:30.729 [714/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:30.987 [715/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:30.987 [716/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:30.987 [717/764] Linking target app/dpdk-test-eventdev 00:03:30.987 [718/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:31.245 [719/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:31.245 [720/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:31.245 [721/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:31.245 [722/764] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:31.245 [723/764] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:31.503 [724/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:31.503 [725/764] Linking target lib/librte_pipeline.so.25.0 00:03:31.503 [726/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:31.503 [727/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:31.503 [728/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:31.761 [729/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:31.761 [730/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:31.761 [731/764] Linking target app/dpdk-test-pipeline 00:03:31.761 [732/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:32.019 [733/764] Linking target app/dpdk-test-bbdev 00:03:32.019 [734/764] Linking target app/dpdk-test-mldev 00:03:32.019 [735/764] Compiling C object app/dpdk-testpmd.p/test-pmd_hairpin.c.o 00:03:32.019 [736/764] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:32.276 [737/764] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:32.276 [738/764] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:32.276 [739/764] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:32.276 [740/764] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:32.534 [741/764] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:32.534 [742/764] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:32.534 [743/764] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:32.534 [744/764] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:32.534 [745/764] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:32.792 [746/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:32.792 [747/764] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:32.792 [748/764] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:33.050 [749/764] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:33.050 [750/764] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:33.050 [751/764] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:33.308 [752/764] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:33.308 [753/764] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:33.566 [754/764] Linking target app/dpdk-test-sad 00:03:33.566 [755/764] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:33.566 [756/764] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:33.567 [757/764] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:03:33.567 [758/764] Linking target app/dpdk-test-regex 00:03:33.567 [759/764] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:33.825 [760/764] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:33.825 [761/764] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:33.825 [762/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:34.083 [763/764] Linking target app/dpdk-test-security-perf 00:03:34.083 [764/764] Linking target app/dpdk-testpmd 00:03:34.083 03:12:21 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 
00:03:34.083 03:12:21 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:34.083 03:12:21 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:34.343 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:34.343 [0/1] Installing files. 00:03:34.343 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:34.343 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_eddsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_skeleton.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:34.343 Installing 
/home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:34.343 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:34.344 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:34.344 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 
Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.605 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:34.606 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.606 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:34.607 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:34.607 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.607 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:34.608 
Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:34.608 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:34.608 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:34.608 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:34.609 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:34.609 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:34.609 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:34.609 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:34.609 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:34.609 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_metrics.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 
Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.609 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.870 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.870 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.870 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.870 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.870 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.870 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.870 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.870 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.870 Installing lib/librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.870 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.870 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.870 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.870 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.870 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.870 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.870 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.870 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.870 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing 
lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:34.871 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:34.871 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:34.871 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:34.871 Installing drivers/librte_power_acpi.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing drivers/librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:34.871 Installing drivers/librte_power_amd_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing drivers/librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:34.871 Installing drivers/librte_power_cppc.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing drivers/librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:34.871 Installing drivers/librte_power_intel_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing drivers/librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:34.871 Installing drivers/librte_power_intel_uncore.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing drivers/librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:34.871 Installing drivers/librte_power_kvm_vm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.871 Installing drivers/librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:34.871 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-test-fib to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitset.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.871 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore_var.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_cksum.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip4.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.872 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_uncore_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_qos.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing 
/home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.873 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/drivers/power/kvm_vm/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc 
to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:34.874 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:34.874 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25 00:03:34.874 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:34.874 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25 00:03:34.874 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:34.874 Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25 00:03:34.874 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:03:34.874 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25 00:03:34.874 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:34.874 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25 00:03:34.874 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:34.874 Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25 00:03:34.874 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:34.874 Installing symlink pointing to librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25 00:03:34.874 Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:34.874 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25 00:03:34.874 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:34.874 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25 00:03:34.874 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:34.874 Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25 00:03:34.874 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:34.874 Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25 00:03:34.874 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:34.874 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25 00:03:34.874 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:34.874 Installing symlink pointing to librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25 00:03:34.874 Installing symlink pointing to librte_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:34.874 Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25 00:03:34.874 Installing 
symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:34.874 Installing symlink pointing to librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25 00:03:34.874 Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:34.874 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25 00:03:34.874 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:34.874 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25 00:03:34.874 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:34.874 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25 00:03:34.874 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:34.874 Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25 00:03:34.874 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:34.874 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25 00:03:34.874 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:34.874 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25 00:03:34.874 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:34.874 Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25 00:03:34.874 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:34.874 Installing symlink pointing to librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25 00:03:34.874 Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:34.874 Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25 00:03:34.874 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:34.875 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25 00:03:34.875 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:34.875 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25 00:03:34.875 Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:34.875 Installing symlink pointing to librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25 00:03:34.875 Installing symlink pointing to librte_efd.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:34.875 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25 00:03:34.875 Installing 
symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:34.875 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25 00:03:34.875 Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:34.875 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25 00:03:34.875 Installing symlink pointing to librte_gpudev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:34.875 Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25 00:03:34.875 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:34.875 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25 00:03:34.875 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:34.875 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25 00:03:34.875 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:34.875 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25 00:03:34.875 Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:34.875 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25 00:03:34.875 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:34.875 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25 00:03:34.875 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:34.875 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25 00:03:34.875 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:34.875 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25 00:03:34.875 Installing symlink pointing to librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:34.875 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25 00:03:34.875 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:34.875 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25 00:03:34.875 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:34.875 Installing symlink pointing to librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25 00:03:34.875 Installing symlink pointing to librte_regexdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:34.875 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25 00:03:34.875 Installing symlink pointing 
to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:34.875 Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25 00:03:34.875 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:34.875 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25 00:03:34.875 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:34.875 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25 00:03:34.875 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:34.875 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25 00:03:34.875 Installing symlink pointing to librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:34.875 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25 00:03:34.875 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:34.875 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25 00:03:34.875 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:34.875 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25 00:03:34.875 Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:34.875 Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25 00:03:34.875 Installing symlink pointing to librte_pdcp.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:34.875 Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25 00:03:34.875 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:34.875 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25 00:03:34.875 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:34.875 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25 00:03:34.875 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:34.875 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25 00:03:34.875 Installing symlink pointing to librte_table.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:34.875 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25 00:03:34.875 Installing symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:34.875 Installing symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25 00:03:34.875 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 
00:03:34.875 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25 00:03:34.875 Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:34.875 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:03:34.875 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:03:34.875 Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:03:34.875 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:03:34.875 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:03:34.875 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:03:34.875 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:03:34.875 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:03:34.875 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:03:34.875 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:03:34.875 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:03:34.875 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:03:34.875 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:03:34.875 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:03:34.875 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:03:34.875 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:03:34.875 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:03:34.875 './librte_power_acpi.so' -> 'dpdk/pmds-25.0/librte_power_acpi.so' 00:03:34.875 './librte_power_acpi.so.25' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25' 00:03:34.875 './librte_power_acpi.so.25.0' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25.0' 00:03:34.875 './librte_power_amd_pstate.so' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so' 00:03:34.875 './librte_power_amd_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25' 00:03:34.875 './librte_power_amd_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0' 00:03:34.875 './librte_power_cppc.so' -> 'dpdk/pmds-25.0/librte_power_cppc.so' 00:03:34.875 './librte_power_cppc.so.25' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25' 00:03:34.875 './librte_power_cppc.so.25.0' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25.0' 00:03:34.875 './librte_power_intel_pstate.so' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so' 00:03:34.876 './librte_power_intel_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25' 00:03:34.876 './librte_power_intel_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0' 00:03:34.876 './librte_power_intel_uncore.so' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so' 00:03:34.876 './librte_power_intel_uncore.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25' 00:03:34.876 './librte_power_intel_uncore.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0' 00:03:34.876 './librte_power_kvm_vm.so' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so' 00:03:34.876 './librte_power_kvm_vm.so.25' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25' 00:03:34.876 
'./librte_power_kvm_vm.so.25.0' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0' 00:03:34.876 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:03:34.876 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:03:34.876 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:03:34.876 Installing symlink pointing to librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25 00:03:34.876 Installing symlink pointing to librte_power_acpi.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:03:34.876 Installing symlink pointing to librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25 00:03:34.876 Installing symlink pointing to librte_power_amd_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:03:34.876 Installing symlink pointing to librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25 00:03:34.876 Installing symlink pointing to librte_power_cppc.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:03:34.876 Installing symlink pointing to librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25 00:03:34.876 Installing symlink pointing to librte_power_intel_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:03:34.876 Installing symlink pointing to librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25 00:03:34.876 Installing symlink pointing to librte_power_intel_uncore.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:03:34.876 Installing symlink pointing to librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25 00:03:34.876 Installing symlink pointing to librte_power_kvm_vm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:03:34.876 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:03:34.876 03:12:22 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:34.876 03:12:22 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:34.876 00:03:34.876 real 0m39.648s 00:03:34.876 user 4m39.767s 00:03:34.876 sys 0m41.354s 00:03:34.876 03:12:22 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:34.876 03:12:22 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:34.876 ************************************ 00:03:34.876 END TEST build_native_dpdk 00:03:34.876 ************************************ 00:03:34.876 03:12:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:34.876 03:12:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:34.876 03:12:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:34.876 03:12:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:34.876 03:12:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:34.876 03:12:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 
]] 00:03:34.876 03:12:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:34.876 03:12:22 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:34.876 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:35.134 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.134 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:35.134 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:35.392 Using 'verbs' RDMA provider 00:03:46.768 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:56.735 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:56.735 Creating mk/config.mk...done. 00:03:56.735 Creating mk/cc.flags.mk...done. 00:03:56.735 Type 'make' to build. 00:03:56.735 03:12:43 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:56.735 03:12:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:56.735 03:12:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:56.735 03:12:43 -- common/autotest_common.sh@10 -- $ set +x 00:03:56.735 ************************************ 00:03:56.735 START TEST make 00:03:56.735 ************************************ 00:03:56.735 03:12:43 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:56.735 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:56.735 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:56.735 meson setup builddir \ 00:03:56.735 -Dwith-libaio=enabled \ 00:03:56.735 -Dwith-liburing=enabled \ 00:03:56.735 -Dwith-libvfn=disabled \ 00:03:56.735 -Dwith-spdk=disabled \ 00:03:56.735 -Dexamples=false \ 00:03:56.735 -Dtests=false \ 00:03:56.735 -Dtools=false && \ 00:03:56.735 meson compile -C builddir && \ 00:03:56.735 cd -) 00:03:56.735 make[1]: Nothing to be done for 'all'. 
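The DPDK install stage above leaves behind everything the configure step then consumes: the libdpdk pkg-config metadata under build/lib/pkgconfig (the exact path the "Using ... for additional libs" line reports), a soname chain for each library (librte_*.so -> librte_*.so.25 -> librte_*.so.25.0), and the driver PMDs relocated into the version-scoped dpdk/pmds-25.0 plugin directory. A minimal sketch for inspecting that staged install by hand, assuming the same /home/vagrant/spdk_repo paths recorded in this log:

    # Point pkg-config at the staged DPDK (the path reported by configure above).
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk      # version of the staged DPDK tree
    pkg-config --cflags --libs libdpdk   # flags a consumer such as SPDK's configure can pick up

    # Each library is one real file plus two symlinks, e.g.:
    #   librte_eal.so -> librte_eal.so.25 -> librte_eal.so.25.0
    ls -l /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so*

    # Drivers live in the version-scoped plugin directory created above.
    ls /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0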
00:03:58.637 The Meson build system 00:03:58.638 Version: 1.5.0 00:03:58.638 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:58.638 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:58.638 Build type: native build 00:03:58.638 Project name: xnvme 00:03:58.638 Project version: 0.7.5 00:03:58.638 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:58.638 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:58.638 Host machine cpu family: x86_64 00:03:58.638 Host machine cpu: x86_64 00:03:58.638 Message: host_machine.system: linux 00:03:58.638 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:58.638 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:58.638 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:58.638 Run-time dependency threads found: YES 00:03:58.638 Has header "setupapi.h" : NO 00:03:58.638 Has header "linux/blkzoned.h" : YES 00:03:58.638 Has header "linux/blkzoned.h" : YES (cached) 00:03:58.638 Has header "libaio.h" : YES 00:03:58.638 Library aio found: YES 00:03:58.638 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:58.638 Run-time dependency liburing found: YES 2.2 00:03:58.638 Dependency libvfn skipped: feature with-libvfn disabled 00:03:58.638 Found CMake: /usr/bin/cmake (3.27.7) 00:03:58.638 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:58.638 Subproject spdk : skipped: feature with-spdk disabled 00:03:58.638 Run-time dependency appleframeworks found: NO (tried framework) 00:03:58.638 Run-time dependency appleframeworks found: NO (tried framework) 00:03:58.638 Library rt found: YES 00:03:58.638 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:58.638 Configuring xnvme_config.h using configuration 00:03:58.638 Configuring xnvme.spec using configuration 00:03:58.638 Run-time dependency bash-completion found: YES 2.11 00:03:58.638 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:58.638 Program cp found: YES (/usr/bin/cp) 00:03:58.638 Build targets in project: 3 00:03:58.638 00:03:58.638 xnvme 0.7.5 00:03:58.638 00:03:58.638 Subprojects 00:03:58.638 spdk : NO Feature 'with-spdk' disabled 00:03:58.638 00:03:58.638 User defined options 00:03:58.638 examples : false 00:03:58.638 tests : false 00:03:58.638 tools : false 00:03:58.638 with-libaio : enabled 00:03:58.638 with-liburing: enabled 00:03:58.638 with-libvfn : disabled 00:03:58.638 with-spdk : disabled 00:03:58.638 00:03:58.638 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:58.638 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:58.638 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:58.897 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:58.897 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:58.897 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:58.897 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:58.897 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:58.897 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:58.897 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:58.897 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:58.897 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:58.897 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:58.897 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:58.897 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:58.897 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:58.897 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:58.897 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:58.897 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:58.897 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:58.897 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:58.897 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:58.897 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:58.897 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:58.897 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:58.897 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:58.897 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:58.897 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:58.897 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:59.156 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:59.156 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:59.156 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:59.156 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:59.156 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:59.156 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:59.156 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:59.156 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:59.156 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:59.156 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:59.156 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:59.156 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:59.156 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:59.156 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:59.156 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:59.156 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:59.156 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:59.156 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:59.156 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:59.156 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:59.157 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:59.157 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:59.157 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:59.157 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:59.157 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:59.157 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:59.157 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:59.157 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:59.157 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:59.157 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:59.157 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:59.157 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:59.157 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:59.416 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:59.416 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:59.416 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:59.416 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:59.416 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:59.416 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:59.416 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:59.416 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:59.416 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:59.416 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:59.416 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:59.416 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:59.416 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:59.983 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:59.983 [75/76] Linking static target lib/libxnvme.a 00:03:59.983 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:59.983 INFO: autodetecting backend as ninja 00:03:59.983 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:59.983 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:32.044 CC lib/log/log.o 00:04:32.044 CC lib/log/log_flags.o 00:04:32.044 CC lib/log/log_deprecated.o 00:04:32.044 CC lib/ut_mock/mock.o 00:04:32.044 CC lib/ut/ut.o 00:04:32.044 LIB libspdk_log.a 00:04:32.044 LIB libspdk_ut_mock.a 00:04:32.044 LIB libspdk_ut.a 00:04:32.044 SO libspdk_log.so.7.1 00:04:32.044 SO libspdk_ut_mock.so.6.0 00:04:32.044 SO libspdk_ut.so.2.0 00:04:32.044 SYMLINK libspdk_log.so 00:04:32.044 SYMLINK libspdk_ut.so 00:04:32.044 SYMLINK libspdk_ut_mock.so 00:04:32.044 CC lib/ioat/ioat.o 00:04:32.044 CC lib/util/base64.o 00:04:32.044 CC lib/util/bit_array.o 00:04:32.044 CC lib/util/cpuset.o 00:04:32.044 CC lib/util/crc16.o 00:04:32.044 CC lib/util/crc32.o 00:04:32.044 CC lib/util/crc32c.o 00:04:32.044 CC lib/dma/dma.o 00:04:32.044 CXX lib/trace_parser/trace.o 00:04:32.044 CC lib/vfio_user/host/vfio_user_pci.o 00:04:32.044 CC lib/util/crc32_ieee.o 00:04:32.044 CC lib/vfio_user/host/vfio_user.o 00:04:32.044 CC lib/util/crc64.o 00:04:32.044 CC lib/util/dif.o 00:04:32.044 LIB libspdk_dma.a 00:04:32.044 CC lib/util/fd.o 00:04:32.044 CC lib/util/fd_group.o 00:04:32.044 CC lib/util/file.o 00:04:32.044 SO libspdk_dma.so.5.0 00:04:32.044 LIB libspdk_ioat.a 00:04:32.044 CC lib/util/hexlify.o 00:04:32.044 SO libspdk_ioat.so.7.0 00:04:32.044 SYMLINK libspdk_dma.so 
00:04:32.044 CC lib/util/iov.o 00:04:32.044 CC lib/util/math.o 00:04:32.044 SYMLINK libspdk_ioat.so 00:04:32.044 CC lib/util/net.o 00:04:32.044 CC lib/util/pipe.o 00:04:32.044 LIB libspdk_vfio_user.a 00:04:32.044 CC lib/util/strerror_tls.o 00:04:32.044 CC lib/util/string.o 00:04:32.044 SO libspdk_vfio_user.so.5.0 00:04:32.044 CC lib/util/uuid.o 00:04:32.044 CC lib/util/xor.o 00:04:32.044 SYMLINK libspdk_vfio_user.so 00:04:32.044 CC lib/util/zipf.o 00:04:32.044 CC lib/util/md5.o 00:04:32.044 LIB libspdk_trace_parser.a 00:04:32.044 LIB libspdk_util.a 00:04:32.044 SO libspdk_trace_parser.so.6.0 00:04:32.044 SO libspdk_util.so.10.1 00:04:32.044 SYMLINK libspdk_trace_parser.so 00:04:32.044 SYMLINK libspdk_util.so 00:04:32.044 CC lib/conf/conf.o 00:04:32.044 CC lib/idxd/idxd.o 00:04:32.044 CC lib/env_dpdk/env.o 00:04:32.044 CC lib/idxd/idxd_kernel.o 00:04:32.044 CC lib/idxd/idxd_user.o 00:04:32.044 CC lib/env_dpdk/memory.o 00:04:32.044 CC lib/env_dpdk/pci.o 00:04:32.044 CC lib/rdma_utils/rdma_utils.o 00:04:32.044 CC lib/json/json_parse.o 00:04:32.044 CC lib/vmd/vmd.o 00:04:32.044 CC lib/vmd/led.o 00:04:32.044 LIB libspdk_conf.a 00:04:32.044 CC lib/json/json_util.o 00:04:32.044 SO libspdk_conf.so.6.0 00:04:32.044 LIB libspdk_rdma_utils.a 00:04:32.044 SO libspdk_rdma_utils.so.1.0 00:04:32.044 SYMLINK libspdk_conf.so 00:04:32.044 CC lib/json/json_write.o 00:04:32.044 CC lib/env_dpdk/init.o 00:04:32.044 CC lib/env_dpdk/threads.o 00:04:32.044 SYMLINK libspdk_rdma_utils.so 00:04:32.044 CC lib/env_dpdk/pci_ioat.o 00:04:32.302 CC lib/env_dpdk/pci_virtio.o 00:04:32.302 CC lib/env_dpdk/pci_vmd.o 00:04:32.302 CC lib/env_dpdk/pci_idxd.o 00:04:32.302 CC lib/env_dpdk/pci_event.o 00:04:32.302 LIB libspdk_json.a 00:04:32.302 CC lib/env_dpdk/sigbus_handler.o 00:04:32.302 SO libspdk_json.so.6.0 00:04:32.302 CC lib/rdma_provider/common.o 00:04:32.302 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:32.302 LIB libspdk_idxd.a 00:04:32.302 SYMLINK libspdk_json.so 00:04:32.302 SO libspdk_idxd.so.12.1 00:04:32.302 LIB libspdk_vmd.a 00:04:32.302 CC lib/env_dpdk/pci_dpdk.o 00:04:32.302 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:32.302 SO libspdk_vmd.so.6.0 00:04:32.302 SYMLINK libspdk_idxd.so 00:04:32.302 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:32.560 SYMLINK libspdk_vmd.so 00:04:32.560 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:32.560 CC lib/jsonrpc/jsonrpc_server.o 00:04:32.560 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:32.560 CC lib/jsonrpc/jsonrpc_client.o 00:04:32.560 LIB libspdk_rdma_provider.a 00:04:32.560 SO libspdk_rdma_provider.so.7.0 00:04:32.560 SYMLINK libspdk_rdma_provider.so 00:04:32.560 LIB libspdk_jsonrpc.a 00:04:32.818 SO libspdk_jsonrpc.so.6.0 00:04:32.818 SYMLINK libspdk_jsonrpc.so 00:04:33.075 LIB libspdk_env_dpdk.a 00:04:33.075 CC lib/rpc/rpc.o 00:04:33.075 SO libspdk_env_dpdk.so.15.1 00:04:33.075 SYMLINK libspdk_env_dpdk.so 00:04:33.075 LIB libspdk_rpc.a 00:04:33.335 SO libspdk_rpc.so.6.0 00:04:33.335 SYMLINK libspdk_rpc.so 00:04:33.335 CC lib/notify/notify_rpc.o 00:04:33.335 CC lib/notify/notify.o 00:04:33.335 CC lib/keyring/keyring.o 00:04:33.335 CC lib/keyring/keyring_rpc.o 00:04:33.335 CC lib/trace/trace_flags.o 00:04:33.335 CC lib/trace/trace.o 00:04:33.335 CC lib/trace/trace_rpc.o 00:04:33.593 LIB libspdk_notify.a 00:04:33.593 SO libspdk_notify.so.6.0 00:04:33.593 SYMLINK libspdk_notify.so 00:04:33.593 LIB libspdk_trace.a 00:04:33.593 LIB libspdk_keyring.a 00:04:33.593 SO libspdk_keyring.so.2.0 00:04:33.593 SO libspdk_trace.so.11.0 00:04:33.851 SYMLINK libspdk_keyring.so 00:04:33.851 SYMLINK 
libspdk_trace.so 00:04:33.851 CC lib/sock/sock.o 00:04:33.851 CC lib/sock/sock_rpc.o 00:04:33.851 CC lib/thread/thread.o 00:04:33.851 CC lib/thread/iobuf.o 00:04:34.417 LIB libspdk_sock.a 00:04:34.417 SO libspdk_sock.so.10.0 00:04:34.417 SYMLINK libspdk_sock.so 00:04:34.729 CC lib/nvme/nvme_ctrlr.o 00:04:34.729 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:34.729 CC lib/nvme/nvme_ns_cmd.o 00:04:34.729 CC lib/nvme/nvme_fabric.o 00:04:34.729 CC lib/nvme/nvme_ns.o 00:04:34.729 CC lib/nvme/nvme_qpair.o 00:04:34.729 CC lib/nvme/nvme.o 00:04:34.729 CC lib/nvme/nvme_pcie.o 00:04:34.729 CC lib/nvme/nvme_pcie_common.o 00:04:35.330 CC lib/nvme/nvme_quirks.o 00:04:35.330 LIB libspdk_thread.a 00:04:35.330 SO libspdk_thread.so.11.0 00:04:35.330 CC lib/nvme/nvme_transport.o 00:04:35.330 CC lib/nvme/nvme_discovery.o 00:04:35.330 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:35.330 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:35.330 SYMLINK libspdk_thread.so 00:04:35.330 CC lib/nvme/nvme_tcp.o 00:04:35.330 CC lib/nvme/nvme_opal.o 00:04:35.330 CC lib/nvme/nvme_io_msg.o 00:04:35.330 CC lib/nvme/nvme_poll_group.o 00:04:35.588 CC lib/nvme/nvme_zns.o 00:04:35.588 CC lib/nvme/nvme_stubs.o 00:04:35.847 CC lib/nvme/nvme_auth.o 00:04:35.847 CC lib/nvme/nvme_cuse.o 00:04:35.847 CC lib/nvme/nvme_rdma.o 00:04:35.847 CC lib/accel/accel.o 00:04:35.847 CC lib/accel/accel_rpc.o 00:04:36.104 CC lib/accel/accel_sw.o 00:04:36.104 CC lib/blob/blobstore.o 00:04:36.104 CC lib/init/json_config.o 00:04:36.104 CC lib/virtio/virtio.o 00:04:36.104 CC lib/virtio/virtio_vhost_user.o 00:04:36.362 CC lib/init/subsystem.o 00:04:36.362 CC lib/init/subsystem_rpc.o 00:04:36.362 CC lib/init/rpc.o 00:04:36.362 CC lib/virtio/virtio_vfio_user.o 00:04:36.362 CC lib/virtio/virtio_pci.o 00:04:36.362 CC lib/blob/request.o 00:04:36.362 CC lib/blob/zeroes.o 00:04:36.620 LIB libspdk_init.a 00:04:36.620 SO libspdk_init.so.6.0 00:04:36.620 SYMLINK libspdk_init.so 00:04:36.620 CC lib/blob/blob_bs_dev.o 00:04:36.620 CC lib/fsdev/fsdev.o 00:04:36.620 CC lib/fsdev/fsdev_io.o 00:04:36.620 CC lib/fsdev/fsdev_rpc.o 00:04:36.620 LIB libspdk_virtio.a 00:04:36.620 SO libspdk_virtio.so.7.0 00:04:36.620 CC lib/event/app.o 00:04:36.620 CC lib/event/reactor.o 00:04:36.620 CC lib/event/log_rpc.o 00:04:36.877 SYMLINK libspdk_virtio.so 00:04:36.877 CC lib/event/app_rpc.o 00:04:36.877 CC lib/event/scheduler_static.o 00:04:36.877 LIB libspdk_accel.a 00:04:36.877 SO libspdk_accel.so.16.0 00:04:36.877 SYMLINK libspdk_accel.so 00:04:37.135 CC lib/bdev/bdev.o 00:04:37.135 CC lib/bdev/bdev_rpc.o 00:04:37.135 CC lib/bdev/part.o 00:04:37.135 CC lib/bdev/bdev_zone.o 00:04:37.135 CC lib/bdev/scsi_nvme.o 00:04:37.135 LIB libspdk_nvme.a 00:04:37.135 LIB libspdk_fsdev.a 00:04:37.135 SO libspdk_fsdev.so.2.0 00:04:37.135 LIB libspdk_event.a 00:04:37.135 SYMLINK libspdk_fsdev.so 00:04:37.135 SO libspdk_nvme.so.15.0 00:04:37.135 SO libspdk_event.so.14.0 00:04:37.393 SYMLINK libspdk_event.so 00:04:37.393 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:37.393 SYMLINK libspdk_nvme.so 00:04:37.960 LIB libspdk_fuse_dispatcher.a 00:04:37.960 SO libspdk_fuse_dispatcher.so.1.0 00:04:37.960 SYMLINK libspdk_fuse_dispatcher.so 00:04:38.527 LIB libspdk_blob.a 00:04:38.527 SO libspdk_blob.so.11.0 00:04:38.785 SYMLINK libspdk_blob.so 00:04:39.043 CC lib/blobfs/tree.o 00:04:39.043 CC lib/blobfs/blobfs.o 00:04:39.044 CC lib/lvol/lvol.o 00:04:39.610 LIB libspdk_bdev.a 00:04:39.610 LIB libspdk_blobfs.a 00:04:39.610 SO libspdk_bdev.so.17.0 00:04:39.610 SO libspdk_blobfs.so.10.0 00:04:39.610 LIB libspdk_lvol.a 00:04:39.610 
SYMLINK libspdk_bdev.so 00:04:39.610 SYMLINK libspdk_blobfs.so 00:04:39.610 SO libspdk_lvol.so.10.0 00:04:39.869 SYMLINK libspdk_lvol.so 00:04:39.869 CC lib/nvmf/ctrlr_discovery.o 00:04:39.869 CC lib/nvmf/ctrlr.o 00:04:39.869 CC lib/nvmf/ctrlr_bdev.o 00:04:39.869 CC lib/nvmf/subsystem.o 00:04:39.869 CC lib/nvmf/nvmf.o 00:04:39.869 CC lib/nvmf/nvmf_rpc.o 00:04:39.869 CC lib/ftl/ftl_core.o 00:04:39.869 CC lib/scsi/dev.o 00:04:39.869 CC lib/ublk/ublk.o 00:04:39.869 CC lib/nbd/nbd.o 00:04:39.869 CC lib/scsi/lun.o 00:04:40.127 CC lib/nbd/nbd_rpc.o 00:04:40.127 CC lib/nvmf/transport.o 00:04:40.127 CC lib/ftl/ftl_init.o 00:04:40.127 CC lib/scsi/port.o 00:04:40.386 LIB libspdk_nbd.a 00:04:40.386 SO libspdk_nbd.so.7.0 00:04:40.386 CC lib/scsi/scsi.o 00:04:40.386 SYMLINK libspdk_nbd.so 00:04:40.386 CC lib/scsi/scsi_bdev.o 00:04:40.386 CC lib/ublk/ublk_rpc.o 00:04:40.386 CC lib/ftl/ftl_layout.o 00:04:40.386 CC lib/scsi/scsi_pr.o 00:04:40.386 LIB libspdk_ublk.a 00:04:40.386 SO libspdk_ublk.so.3.0 00:04:40.386 CC lib/nvmf/tcp.o 00:04:40.644 CC lib/scsi/scsi_rpc.o 00:04:40.644 SYMLINK libspdk_ublk.so 00:04:40.644 CC lib/scsi/task.o 00:04:40.644 CC lib/nvmf/stubs.o 00:04:40.644 CC lib/ftl/ftl_debug.o 00:04:40.644 CC lib/ftl/ftl_io.o 00:04:40.644 CC lib/nvmf/mdns_server.o 00:04:40.644 CC lib/ftl/ftl_sb.o 00:04:40.644 CC lib/nvmf/rdma.o 00:04:40.903 CC lib/ftl/ftl_l2p.o 00:04:40.903 LIB libspdk_scsi.a 00:04:40.903 CC lib/nvmf/auth.o 00:04:40.903 CC lib/ftl/ftl_l2p_flat.o 00:04:40.903 CC lib/ftl/ftl_nv_cache.o 00:04:40.903 SO libspdk_scsi.so.9.0 00:04:40.903 CC lib/ftl/ftl_band.o 00:04:40.903 SYMLINK libspdk_scsi.so 00:04:40.903 CC lib/ftl/ftl_band_ops.o 00:04:41.161 CC lib/ftl/ftl_writer.o 00:04:41.161 CC lib/vhost/vhost.o 00:04:41.161 CC lib/iscsi/conn.o 00:04:41.161 CC lib/iscsi/init_grp.o 00:04:41.161 CC lib/iscsi/iscsi.o 00:04:41.161 CC lib/iscsi/param.o 00:04:41.419 CC lib/ftl/ftl_rq.o 00:04:41.419 CC lib/iscsi/portal_grp.o 00:04:41.419 CC lib/iscsi/tgt_node.o 00:04:41.419 CC lib/iscsi/iscsi_subsystem.o 00:04:41.419 CC lib/iscsi/iscsi_rpc.o 00:04:41.677 CC lib/iscsi/task.o 00:04:41.677 CC lib/ftl/ftl_reloc.o 00:04:41.677 CC lib/ftl/ftl_l2p_cache.o 00:04:41.677 CC lib/ftl/ftl_p2l.o 00:04:41.677 CC lib/ftl/ftl_p2l_log.o 00:04:41.677 CC lib/ftl/mngt/ftl_mngt.o 00:04:41.935 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:41.935 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:41.935 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:41.935 CC lib/vhost/vhost_rpc.o 00:04:41.935 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:41.935 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:41.935 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:41.935 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:41.935 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:41.935 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:42.194 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:42.194 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:42.194 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:42.194 CC lib/ftl/utils/ftl_conf.o 00:04:42.194 CC lib/ftl/utils/ftl_md.o 00:04:42.194 CC lib/ftl/utils/ftl_mempool.o 00:04:42.194 CC lib/vhost/vhost_scsi.o 00:04:42.194 CC lib/ftl/utils/ftl_bitmap.o 00:04:42.194 CC lib/ftl/utils/ftl_property.o 00:04:42.453 CC lib/vhost/vhost_blk.o 00:04:42.453 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:42.453 LIB libspdk_iscsi.a 00:04:42.453 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:42.453 SO libspdk_iscsi.so.8.0 00:04:42.453 CC lib/vhost/rte_vhost_user.o 00:04:42.453 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:42.453 SYMLINK libspdk_iscsi.so 00:04:42.453 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:42.453 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:04:42.453 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:42.453 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:42.711 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:42.711 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:42.711 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:42.711 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:42.711 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:42.711 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:42.711 CC lib/ftl/base/ftl_base_dev.o 00:04:42.711 CC lib/ftl/base/ftl_base_bdev.o 00:04:42.970 CC lib/ftl/ftl_trace.o 00:04:42.970 LIB libspdk_nvmf.a 00:04:42.970 SO libspdk_nvmf.so.20.0 00:04:42.970 LIB libspdk_ftl.a 00:04:43.228 SO libspdk_ftl.so.9.0 00:04:43.228 SYMLINK libspdk_nvmf.so 00:04:43.487 SYMLINK libspdk_ftl.so 00:04:43.487 LIB libspdk_vhost.a 00:04:43.487 SO libspdk_vhost.so.8.0 00:04:43.487 SYMLINK libspdk_vhost.so 00:04:44.055 CC module/env_dpdk/env_dpdk_rpc.o 00:04:44.055 CC module/keyring/linux/keyring.o 00:04:44.055 CC module/blob/bdev/blob_bdev.o 00:04:44.055 CC module/accel/error/accel_error.o 00:04:44.055 CC module/accel/ioat/accel_ioat.o 00:04:44.055 CC module/keyring/file/keyring.o 00:04:44.055 CC module/fsdev/aio/fsdev_aio.o 00:04:44.055 CC module/accel/dsa/accel_dsa.o 00:04:44.055 CC module/sock/posix/posix.o 00:04:44.055 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:44.055 LIB libspdk_env_dpdk_rpc.a 00:04:44.055 SO libspdk_env_dpdk_rpc.so.6.0 00:04:44.055 CC module/keyring/linux/keyring_rpc.o 00:04:44.055 SYMLINK libspdk_env_dpdk_rpc.so 00:04:44.055 CC module/keyring/file/keyring_rpc.o 00:04:44.055 CC module/accel/error/accel_error_rpc.o 00:04:44.055 LIB libspdk_scheduler_dynamic.a 00:04:44.055 CC module/accel/ioat/accel_ioat_rpc.o 00:04:44.055 SO libspdk_scheduler_dynamic.so.4.0 00:04:44.055 LIB libspdk_keyring_linux.a 00:04:44.055 LIB libspdk_blob_bdev.a 00:04:44.055 SO libspdk_keyring_linux.so.1.0 00:04:44.055 SO libspdk_blob_bdev.so.11.0 00:04:44.055 SYMLINK libspdk_scheduler_dynamic.so 00:04:44.055 SYMLINK libspdk_keyring_linux.so 00:04:44.315 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:44.315 SYMLINK libspdk_blob_bdev.so 00:04:44.315 CC module/accel/dsa/accel_dsa_rpc.o 00:04:44.315 CC module/fsdev/aio/linux_aio_mgr.o 00:04:44.315 LIB libspdk_keyring_file.a 00:04:44.315 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:44.315 LIB libspdk_accel_ioat.a 00:04:44.315 LIB libspdk_accel_error.a 00:04:44.315 SO libspdk_keyring_file.so.2.0 00:04:44.315 SO libspdk_accel_ioat.so.6.0 00:04:44.315 SO libspdk_accel_error.so.2.0 00:04:44.315 SYMLINK libspdk_accel_ioat.so 00:04:44.315 SYMLINK libspdk_keyring_file.so 00:04:44.315 SYMLINK libspdk_accel_error.so 00:04:44.315 CC module/scheduler/gscheduler/gscheduler.o 00:04:44.315 LIB libspdk_scheduler_dpdk_governor.a 00:04:44.315 LIB libspdk_accel_dsa.a 00:04:44.315 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:44.315 SO libspdk_accel_dsa.so.5.0 00:04:44.315 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:44.589 SYMLINK libspdk_accel_dsa.so 00:04:44.589 CC module/accel/iaa/accel_iaa.o 00:04:44.589 LIB libspdk_scheduler_gscheduler.a 00:04:44.589 SO libspdk_scheduler_gscheduler.so.4.0 00:04:44.589 CC module/blobfs/bdev/blobfs_bdev.o 00:04:44.589 CC module/bdev/delay/vbdev_delay.o 00:04:44.589 CC module/bdev/gpt/gpt.o 00:04:44.589 CC module/bdev/error/vbdev_error.o 00:04:44.589 SYMLINK libspdk_scheduler_gscheduler.so 00:04:44.589 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:44.589 CC module/bdev/lvol/vbdev_lvol.o 00:04:44.589 CC module/bdev/malloc/bdev_malloc.o 00:04:44.589 LIB libspdk_fsdev_aio.a 
00:04:44.589 SO libspdk_fsdev_aio.so.1.0 00:04:44.589 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:44.589 CC module/accel/iaa/accel_iaa_rpc.o 00:04:44.589 SYMLINK libspdk_fsdev_aio.so 00:04:44.589 CC module/bdev/gpt/vbdev_gpt.o 00:04:44.589 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:44.589 LIB libspdk_sock_posix.a 00:04:44.889 SO libspdk_sock_posix.so.6.0 00:04:44.889 LIB libspdk_blobfs_bdev.a 00:04:44.889 LIB libspdk_accel_iaa.a 00:04:44.889 CC module/bdev/error/vbdev_error_rpc.o 00:04:44.889 SO libspdk_blobfs_bdev.so.6.0 00:04:44.889 SO libspdk_accel_iaa.so.3.0 00:04:44.889 LIB libspdk_bdev_delay.a 00:04:44.889 SYMLINK libspdk_sock_posix.so 00:04:44.889 CC module/bdev/null/bdev_null.o 00:04:44.890 CC module/bdev/null/bdev_null_rpc.o 00:04:44.890 SO libspdk_bdev_delay.so.6.0 00:04:44.890 SYMLINK libspdk_blobfs_bdev.so 00:04:44.890 SYMLINK libspdk_accel_iaa.so 00:04:44.890 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:44.890 SYMLINK libspdk_bdev_delay.so 00:04:44.890 LIB libspdk_bdev_error.a 00:04:44.890 SO libspdk_bdev_error.so.6.0 00:04:44.890 LIB libspdk_bdev_gpt.a 00:04:44.890 SYMLINK libspdk_bdev_error.so 00:04:44.890 SO libspdk_bdev_gpt.so.6.0 00:04:44.890 LIB libspdk_bdev_malloc.a 00:04:45.148 CC module/bdev/nvme/bdev_nvme.o 00:04:45.148 CC module/bdev/passthru/vbdev_passthru.o 00:04:45.148 LIB libspdk_bdev_null.a 00:04:45.148 SO libspdk_bdev_malloc.so.6.0 00:04:45.148 SYMLINK libspdk_bdev_gpt.so 00:04:45.148 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:45.148 SO libspdk_bdev_null.so.6.0 00:04:45.148 LIB libspdk_bdev_lvol.a 00:04:45.148 CC module/bdev/raid/bdev_raid.o 00:04:45.148 CC module/bdev/xnvme/bdev_xnvme.o 00:04:45.148 SO libspdk_bdev_lvol.so.6.0 00:04:45.148 SYMLINK libspdk_bdev_malloc.so 00:04:45.148 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:45.148 CC module/bdev/split/vbdev_split.o 00:04:45.148 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:45.148 SYMLINK libspdk_bdev_null.so 00:04:45.148 SYMLINK libspdk_bdev_lvol.so 00:04:45.148 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:45.407 CC module/bdev/aio/bdev_aio.o 00:04:45.407 CC module/bdev/split/vbdev_split_rpc.o 00:04:45.407 LIB libspdk_bdev_passthru.a 00:04:45.407 SO libspdk_bdev_passthru.so.6.0 00:04:45.407 CC module/bdev/ftl/bdev_ftl.o 00:04:45.407 LIB libspdk_bdev_xnvme.a 00:04:45.407 CC module/bdev/iscsi/bdev_iscsi.o 00:04:45.407 SO libspdk_bdev_xnvme.so.3.0 00:04:45.407 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:45.407 SYMLINK libspdk_bdev_passthru.so 00:04:45.407 CC module/bdev/raid/bdev_raid_rpc.o 00:04:45.407 LIB libspdk_bdev_zone_block.a 00:04:45.407 SYMLINK libspdk_bdev_xnvme.so 00:04:45.407 CC module/bdev/raid/bdev_raid_sb.o 00:04:45.407 LIB libspdk_bdev_split.a 00:04:45.407 SO libspdk_bdev_zone_block.so.6.0 00:04:45.407 SO libspdk_bdev_split.so.6.0 00:04:45.665 SYMLINK libspdk_bdev_zone_block.so 00:04:45.665 CC module/bdev/raid/raid0.o 00:04:45.665 SYMLINK libspdk_bdev_split.so 00:04:45.665 CC module/bdev/raid/raid1.o 00:04:45.665 CC module/bdev/aio/bdev_aio_rpc.o 00:04:45.665 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:45.665 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:45.665 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:45.665 LIB libspdk_bdev_aio.a 00:04:45.665 CC module/bdev/raid/concat.o 00:04:45.665 SO libspdk_bdev_aio.so.6.0 00:04:45.665 SYMLINK libspdk_bdev_aio.so 00:04:45.665 CC module/bdev/nvme/nvme_rpc.o 00:04:45.665 LIB libspdk_bdev_ftl.a 00:04:45.665 LIB libspdk_bdev_iscsi.a 00:04:45.665 CC module/bdev/nvme/bdev_mdns_client.o 00:04:45.922 SO libspdk_bdev_ftl.so.6.0 00:04:45.922 
SO libspdk_bdev_iscsi.so.6.0 00:04:45.922 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:45.922 SYMLINK libspdk_bdev_ftl.so 00:04:45.922 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:45.922 SYMLINK libspdk_bdev_iscsi.so 00:04:45.922 CC module/bdev/nvme/vbdev_opal.o 00:04:45.922 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:45.922 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:45.922 LIB libspdk_bdev_raid.a 00:04:45.922 SO libspdk_bdev_raid.so.6.0 00:04:45.922 SYMLINK libspdk_bdev_raid.so 00:04:46.182 LIB libspdk_bdev_virtio.a 00:04:46.182 SO libspdk_bdev_virtio.so.6.0 00:04:46.182 SYMLINK libspdk_bdev_virtio.so 00:04:47.562 LIB libspdk_bdev_nvme.a 00:04:47.822 SO libspdk_bdev_nvme.so.7.1 00:04:47.822 SYMLINK libspdk_bdev_nvme.so 00:04:48.081 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:48.081 CC module/event/subsystems/iobuf/iobuf.o 00:04:48.081 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:48.081 CC module/event/subsystems/scheduler/scheduler.o 00:04:48.081 CC module/event/subsystems/sock/sock.o 00:04:48.081 CC module/event/subsystems/fsdev/fsdev.o 00:04:48.081 CC module/event/subsystems/keyring/keyring.o 00:04:48.081 CC module/event/subsystems/vmd/vmd.o 00:04:48.342 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:48.342 LIB libspdk_event_keyring.a 00:04:48.342 LIB libspdk_event_sock.a 00:04:48.342 LIB libspdk_event_fsdev.a 00:04:48.342 LIB libspdk_event_vhost_blk.a 00:04:48.342 SO libspdk_event_fsdev.so.1.0 00:04:48.342 SO libspdk_event_sock.so.5.0 00:04:48.342 LIB libspdk_event_vmd.a 00:04:48.342 SO libspdk_event_keyring.so.1.0 00:04:48.342 LIB libspdk_event_scheduler.a 00:04:48.342 SO libspdk_event_vhost_blk.so.3.0 00:04:48.342 SO libspdk_event_vmd.so.6.0 00:04:48.342 SO libspdk_event_scheduler.so.4.0 00:04:48.342 SYMLINK libspdk_event_vhost_blk.so 00:04:48.342 SYMLINK libspdk_event_fsdev.so 00:04:48.342 SYMLINK libspdk_event_keyring.so 00:04:48.342 SYMLINK libspdk_event_sock.so 00:04:48.342 LIB libspdk_event_iobuf.a 00:04:48.342 SYMLINK libspdk_event_vmd.so 00:04:48.342 SYMLINK libspdk_event_scheduler.so 00:04:48.342 SO libspdk_event_iobuf.so.3.0 00:04:48.342 SYMLINK libspdk_event_iobuf.so 00:04:48.604 CC module/event/subsystems/accel/accel.o 00:04:48.865 LIB libspdk_event_accel.a 00:04:48.865 SO libspdk_event_accel.so.6.0 00:04:48.865 SYMLINK libspdk_event_accel.so 00:04:49.126 CC module/event/subsystems/bdev/bdev.o 00:04:49.126 LIB libspdk_event_bdev.a 00:04:49.387 SO libspdk_event_bdev.so.6.0 00:04:49.387 SYMLINK libspdk_event_bdev.so 00:04:49.387 CC module/event/subsystems/nbd/nbd.o 00:04:49.387 CC module/event/subsystems/ublk/ublk.o 00:04:49.387 CC module/event/subsystems/scsi/scsi.o 00:04:49.387 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:49.387 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:49.647 LIB libspdk_event_nbd.a 00:04:49.647 LIB libspdk_event_ublk.a 00:04:49.647 LIB libspdk_event_scsi.a 00:04:49.647 SO libspdk_event_ublk.so.3.0 00:04:49.647 SO libspdk_event_nbd.so.6.0 00:04:49.647 LIB libspdk_event_nvmf.a 00:04:49.647 SO libspdk_event_scsi.so.6.0 00:04:49.647 SO libspdk_event_nvmf.so.6.0 00:04:49.647 SYMLINK libspdk_event_nbd.so 00:04:49.647 SYMLINK libspdk_event_ublk.so 00:04:49.647 SYMLINK libspdk_event_scsi.so 00:04:49.647 SYMLINK libspdk_event_nvmf.so 00:04:49.905 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:49.905 CC module/event/subsystems/iscsi/iscsi.o 00:04:49.905 LIB libspdk_event_vhost_scsi.a 00:04:50.163 SO libspdk_event_vhost_scsi.so.3.0 00:04:50.163 LIB libspdk_event_iscsi.a 00:04:50.163 SO libspdk_event_iscsi.so.6.0 
00:04:50.163 SYMLINK libspdk_event_vhost_scsi.so 00:04:50.163 SYMLINK libspdk_event_iscsi.so 00:04:50.163 SO libspdk.so.6.0 00:04:50.163 SYMLINK libspdk.so 00:04:50.421 CC app/spdk_nvme_identify/identify.o 00:04:50.421 CC app/trace_record/trace_record.o 00:04:50.421 CXX app/trace/trace.o 00:04:50.421 CC app/spdk_lspci/spdk_lspci.o 00:04:50.421 CC app/spdk_nvme_perf/perf.o 00:04:50.421 CC app/nvmf_tgt/nvmf_main.o 00:04:50.421 CC app/iscsi_tgt/iscsi_tgt.o 00:04:50.421 CC app/spdk_tgt/spdk_tgt.o 00:04:50.679 LINK spdk_lspci 00:04:50.679 CC test/thread/poller_perf/poller_perf.o 00:04:50.679 CC examples/util/zipf/zipf.o 00:04:50.679 LINK nvmf_tgt 00:04:50.679 LINK iscsi_tgt 00:04:50.679 LINK poller_perf 00:04:50.679 LINK spdk_trace_record 00:04:50.679 LINK spdk_tgt 00:04:50.679 LINK zipf 00:04:50.936 LINK spdk_trace 00:04:50.937 CC test/dma/test_dma/test_dma.o 00:04:50.937 CC app/spdk_nvme_discover/discovery_aer.o 00:04:50.937 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:50.937 CC app/spdk_top/spdk_top.o 00:04:50.937 CC examples/ioat/perf/perf.o 00:04:50.937 CC examples/ioat/verify/verify.o 00:04:50.937 TEST_HEADER include/spdk/accel.h 00:04:50.937 TEST_HEADER include/spdk/accel_module.h 00:04:50.937 TEST_HEADER include/spdk/assert.h 00:04:50.937 TEST_HEADER include/spdk/barrier.h 00:04:50.937 TEST_HEADER include/spdk/base64.h 00:04:50.937 TEST_HEADER include/spdk/bdev.h 00:04:50.937 TEST_HEADER include/spdk/bdev_module.h 00:04:50.937 TEST_HEADER include/spdk/bdev_zone.h 00:04:50.937 TEST_HEADER include/spdk/bit_array.h 00:04:50.937 TEST_HEADER include/spdk/bit_pool.h 00:04:50.937 TEST_HEADER include/spdk/blob_bdev.h 00:04:50.937 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:50.937 TEST_HEADER include/spdk/blobfs.h 00:04:50.937 TEST_HEADER include/spdk/blob.h 00:04:50.937 TEST_HEADER include/spdk/conf.h 00:04:50.937 TEST_HEADER include/spdk/config.h 00:04:50.937 TEST_HEADER include/spdk/cpuset.h 00:04:50.937 TEST_HEADER include/spdk/crc16.h 00:04:50.937 TEST_HEADER include/spdk/crc32.h 00:04:50.937 TEST_HEADER include/spdk/crc64.h 00:04:50.937 TEST_HEADER include/spdk/dif.h 00:04:50.937 TEST_HEADER include/spdk/dma.h 00:04:50.937 TEST_HEADER include/spdk/endian.h 00:04:50.937 TEST_HEADER include/spdk/env_dpdk.h 00:04:50.937 TEST_HEADER include/spdk/env.h 00:04:50.937 TEST_HEADER include/spdk/event.h 00:04:50.937 TEST_HEADER include/spdk/fd_group.h 00:04:50.937 CC test/app/bdev_svc/bdev_svc.o 00:04:50.937 TEST_HEADER include/spdk/fd.h 00:04:50.937 TEST_HEADER include/spdk/file.h 00:04:50.937 TEST_HEADER include/spdk/fsdev.h 00:04:50.937 TEST_HEADER include/spdk/fsdev_module.h 00:04:50.937 TEST_HEADER include/spdk/ftl.h 00:04:50.937 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:50.937 TEST_HEADER include/spdk/gpt_spec.h 00:04:50.937 TEST_HEADER include/spdk/hexlify.h 00:04:50.937 TEST_HEADER include/spdk/histogram_data.h 00:04:50.937 TEST_HEADER include/spdk/idxd.h 00:04:50.937 TEST_HEADER include/spdk/idxd_spec.h 00:04:50.937 TEST_HEADER include/spdk/init.h 00:04:50.937 TEST_HEADER include/spdk/ioat.h 00:04:50.937 TEST_HEADER include/spdk/ioat_spec.h 00:04:50.937 TEST_HEADER include/spdk/iscsi_spec.h 00:04:50.937 TEST_HEADER include/spdk/json.h 00:04:51.195 TEST_HEADER include/spdk/jsonrpc.h 00:04:51.195 TEST_HEADER include/spdk/keyring.h 00:04:51.195 TEST_HEADER include/spdk/keyring_module.h 00:04:51.195 TEST_HEADER include/spdk/likely.h 00:04:51.195 TEST_HEADER include/spdk/log.h 00:04:51.195 TEST_HEADER include/spdk/lvol.h 00:04:51.195 LINK spdk_nvme_discover 00:04:51.195 TEST_HEADER 
include/spdk/md5.h 00:04:51.195 TEST_HEADER include/spdk/memory.h 00:04:51.195 TEST_HEADER include/spdk/mmio.h 00:04:51.195 TEST_HEADER include/spdk/nbd.h 00:04:51.195 TEST_HEADER include/spdk/net.h 00:04:51.195 TEST_HEADER include/spdk/notify.h 00:04:51.195 TEST_HEADER include/spdk/nvme.h 00:04:51.195 TEST_HEADER include/spdk/nvme_intel.h 00:04:51.195 LINK interrupt_tgt 00:04:51.195 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:51.195 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:51.195 TEST_HEADER include/spdk/nvme_spec.h 00:04:51.195 TEST_HEADER include/spdk/nvme_zns.h 00:04:51.195 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:51.195 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:51.195 TEST_HEADER include/spdk/nvmf.h 00:04:51.195 TEST_HEADER include/spdk/nvmf_spec.h 00:04:51.195 TEST_HEADER include/spdk/nvmf_transport.h 00:04:51.195 LINK ioat_perf 00:04:51.195 TEST_HEADER include/spdk/opal.h 00:04:51.195 TEST_HEADER include/spdk/opal_spec.h 00:04:51.195 TEST_HEADER include/spdk/pci_ids.h 00:04:51.195 TEST_HEADER include/spdk/pipe.h 00:04:51.195 TEST_HEADER include/spdk/queue.h 00:04:51.195 TEST_HEADER include/spdk/reduce.h 00:04:51.195 TEST_HEADER include/spdk/rpc.h 00:04:51.195 TEST_HEADER include/spdk/scheduler.h 00:04:51.195 TEST_HEADER include/spdk/scsi.h 00:04:51.195 TEST_HEADER include/spdk/scsi_spec.h 00:04:51.195 TEST_HEADER include/spdk/sock.h 00:04:51.195 TEST_HEADER include/spdk/stdinc.h 00:04:51.195 TEST_HEADER include/spdk/string.h 00:04:51.195 TEST_HEADER include/spdk/thread.h 00:04:51.195 LINK verify 00:04:51.195 TEST_HEADER include/spdk/trace.h 00:04:51.195 TEST_HEADER include/spdk/trace_parser.h 00:04:51.195 TEST_HEADER include/spdk/tree.h 00:04:51.195 TEST_HEADER include/spdk/ublk.h 00:04:51.195 LINK bdev_svc 00:04:51.195 TEST_HEADER include/spdk/util.h 00:04:51.195 TEST_HEADER include/spdk/uuid.h 00:04:51.195 TEST_HEADER include/spdk/version.h 00:04:51.195 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:51.195 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:51.195 TEST_HEADER include/spdk/vhost.h 00:04:51.195 TEST_HEADER include/spdk/vmd.h 00:04:51.196 TEST_HEADER include/spdk/xor.h 00:04:51.196 TEST_HEADER include/spdk/zipf.h 00:04:51.196 LINK test_dma 00:04:51.196 CXX test/cpp_headers/accel.o 00:04:51.196 CXX test/cpp_headers/accel_module.o 00:04:51.196 LINK spdk_nvme_identify 00:04:51.454 LINK spdk_nvme_perf 00:04:51.454 CC test/event/event_perf/event_perf.o 00:04:51.454 CXX test/cpp_headers/assert.o 00:04:51.454 CC test/env/mem_callbacks/mem_callbacks.o 00:04:51.454 CC test/event/reactor/reactor.o 00:04:51.454 CC test/event/reactor_perf/reactor_perf.o 00:04:51.454 CC examples/thread/thread/thread_ex.o 00:04:51.454 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:51.454 CC test/event/app_repeat/app_repeat.o 00:04:51.454 CXX test/cpp_headers/barrier.o 00:04:51.454 CC test/app/histogram_perf/histogram_perf.o 00:04:51.454 LINK event_perf 00:04:51.713 LINK reactor 00:04:51.713 LINK reactor_perf 00:04:51.713 LINK histogram_perf 00:04:51.713 LINK app_repeat 00:04:51.713 CXX test/cpp_headers/base64.o 00:04:51.713 LINK thread 00:04:51.713 CC test/app/jsoncat/jsoncat.o 00:04:51.713 CC test/app/stub/stub.o 00:04:51.976 CC test/rpc_client/rpc_client_test.o 00:04:51.976 LINK spdk_top 00:04:51.976 CXX test/cpp_headers/bdev.o 00:04:51.976 CC examples/sock/hello_world/hello_sock.o 00:04:51.976 LINK jsoncat 00:04:51.976 LINK nvme_fuzz 00:04:51.976 CC test/event/scheduler/scheduler.o 00:04:51.976 LINK mem_callbacks 00:04:51.976 LINK stub 00:04:51.976 LINK rpc_client_test 00:04:51.976 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:51.976 CXX test/cpp_headers/bdev_module.o 00:04:51.976 CC app/spdk_dd/spdk_dd.o 00:04:52.236 LINK hello_sock 00:04:52.236 LINK scheduler 00:04:52.236 CC app/vhost/vhost.o 00:04:52.236 CC test/env/vtophys/vtophys.o 00:04:52.236 CC app/fio/nvme/fio_plugin.o 00:04:52.236 CXX test/cpp_headers/bdev_zone.o 00:04:52.236 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:52.236 CC app/fio/bdev/fio_plugin.o 00:04:52.236 LINK vtophys 00:04:52.236 CXX test/cpp_headers/bit_array.o 00:04:52.236 LINK vhost 00:04:52.236 LINK env_dpdk_post_init 00:04:52.496 CC examples/vmd/lsvmd/lsvmd.o 00:04:52.496 LINK spdk_dd 00:04:52.496 CC examples/vmd/led/led.o 00:04:52.496 CXX test/cpp_headers/bit_pool.o 00:04:52.496 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:52.496 CC test/env/memory/memory_ut.o 00:04:52.496 LINK lsvmd 00:04:52.496 LINK led 00:04:52.758 CC test/env/pci/pci_ut.o 00:04:52.758 CXX test/cpp_headers/blob_bdev.o 00:04:52.758 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:52.758 LINK spdk_bdev 00:04:52.758 LINK spdk_nvme 00:04:52.758 CXX test/cpp_headers/blobfs_bdev.o 00:04:52.758 CC examples/idxd/perf/perf.o 00:04:53.086 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:53.086 CC examples/accel/perf/accel_perf.o 00:04:53.086 CXX test/cpp_headers/blobfs.o 00:04:53.086 LINK pci_ut 00:04:53.086 LINK vhost_fuzz 00:04:53.086 CC test/accel/dif/dif.o 00:04:53.086 CC test/blobfs/mkfs/mkfs.o 00:04:53.086 CXX test/cpp_headers/blob.o 00:04:53.348 LINK idxd_perf 00:04:53.348 LINK hello_fsdev 00:04:53.348 LINK mkfs 00:04:53.348 CXX test/cpp_headers/conf.o 00:04:53.348 CXX test/cpp_headers/config.o 00:04:53.348 CXX test/cpp_headers/cpuset.o 00:04:53.610 CXX test/cpp_headers/crc16.o 00:04:53.610 CC test/lvol/esnap/esnap.o 00:04:53.610 LINK accel_perf 00:04:53.610 CXX test/cpp_headers/crc32.o 00:04:53.610 CC test/nvme/aer/aer.o 00:04:53.610 CC examples/blob/hello_world/hello_blob.o 00:04:53.872 CXX test/cpp_headers/crc64.o 00:04:53.872 CC examples/nvme/hello_world/hello_world.o 00:04:53.872 CC examples/nvme/reconnect/reconnect.o 00:04:53.872 LINK memory_ut 00:04:53.872 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:53.872 LINK dif 00:04:53.872 CXX test/cpp_headers/dif.o 00:04:53.872 LINK aer 00:04:53.872 LINK hello_blob 00:04:53.872 LINK iscsi_fuzz 00:04:54.132 LINK hello_world 00:04:54.132 CC examples/nvme/arbitration/arbitration.o 00:04:54.132 CXX test/cpp_headers/dma.o 00:04:54.132 CC test/nvme/reset/reset.o 00:04:54.132 CC test/nvme/sgl/sgl.o 00:04:54.132 LINK reconnect 00:04:54.132 CXX test/cpp_headers/endian.o 00:04:54.394 CC examples/blob/cli/blobcli.o 00:04:54.394 CC test/nvme/e2edp/nvme_dp.o 00:04:54.394 CC test/nvme/overhead/overhead.o 00:04:54.394 LINK nvme_manage 00:04:54.394 CXX test/cpp_headers/env_dpdk.o 00:04:54.394 CC test/nvme/err_injection/err_injection.o 00:04:54.394 LINK arbitration 00:04:54.394 LINK reset 00:04:54.394 LINK sgl 00:04:54.654 LINK nvme_dp 00:04:54.654 CXX test/cpp_headers/env.o 00:04:54.654 CXX test/cpp_headers/event.o 00:04:54.654 LINK overhead 00:04:54.654 LINK err_injection 00:04:54.654 CC test/nvme/startup/startup.o 00:04:54.654 CC examples/nvme/hotplug/hotplug.o 00:04:54.912 CXX test/cpp_headers/fd_group.o 00:04:54.912 CXX test/cpp_headers/fd.o 00:04:54.912 LINK blobcli 00:04:54.912 CC examples/bdev/hello_world/hello_bdev.o 00:04:54.912 CC test/nvme/simple_copy/simple_copy.o 00:04:54.912 CC examples/bdev/bdevperf/bdevperf.o 00:04:54.912 CC test/nvme/reserve/reserve.o 00:04:54.912 LINK startup 00:04:54.912 CXX 
test/cpp_headers/file.o 00:04:54.912 LINK hotplug 00:04:54.912 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:55.169 LINK hello_bdev 00:04:55.169 CC examples/nvme/abort/abort.o 00:04:55.169 LINK simple_copy 00:04:55.169 CXX test/cpp_headers/fsdev.o 00:04:55.169 LINK reserve 00:04:55.169 CXX test/cpp_headers/fsdev_module.o 00:04:55.169 CC test/nvme/connect_stress/connect_stress.o 00:04:55.169 LINK cmb_copy 00:04:55.169 CXX test/cpp_headers/ftl.o 00:04:55.169 CXX test/cpp_headers/fuse_dispatcher.o 00:04:55.169 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:55.169 CXX test/cpp_headers/gpt_spec.o 00:04:55.169 LINK connect_stress 00:04:55.427 CXX test/cpp_headers/hexlify.o 00:04:55.427 CC test/nvme/boot_partition/boot_partition.o 00:04:55.427 CC test/bdev/bdevio/bdevio.o 00:04:55.427 LINK abort 00:04:55.427 LINK pmr_persistence 00:04:55.427 CC test/nvme/compliance/nvme_compliance.o 00:04:55.427 CC test/nvme/fused_ordering/fused_ordering.o 00:04:55.427 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:55.427 CXX test/cpp_headers/histogram_data.o 00:04:55.427 CXX test/cpp_headers/idxd.o 00:04:55.685 LINK boot_partition 00:04:55.685 CC test/nvme/fdp/fdp.o 00:04:55.685 LINK fused_ordering 00:04:55.685 CXX test/cpp_headers/idxd_spec.o 00:04:55.685 LINK doorbell_aers 00:04:55.685 CXX test/cpp_headers/init.o 00:04:55.685 CXX test/cpp_headers/ioat.o 00:04:55.685 CXX test/cpp_headers/ioat_spec.o 00:04:55.685 LINK bdevperf 00:04:55.685 LINK bdevio 00:04:55.685 LINK nvme_compliance 00:04:55.685 CXX test/cpp_headers/iscsi_spec.o 00:04:55.942 CXX test/cpp_headers/json.o 00:04:55.942 CXX test/cpp_headers/jsonrpc.o 00:04:55.942 CC test/nvme/cuse/cuse.o 00:04:55.942 CXX test/cpp_headers/keyring.o 00:04:55.942 CXX test/cpp_headers/keyring_module.o 00:04:55.942 LINK fdp 00:04:55.942 CXX test/cpp_headers/likely.o 00:04:55.942 CXX test/cpp_headers/log.o 00:04:55.942 CXX test/cpp_headers/lvol.o 00:04:55.942 CXX test/cpp_headers/md5.o 00:04:55.942 CXX test/cpp_headers/memory.o 00:04:55.942 CXX test/cpp_headers/mmio.o 00:04:55.942 CXX test/cpp_headers/nbd.o 00:04:55.942 CXX test/cpp_headers/net.o 00:04:56.200 CXX test/cpp_headers/notify.o 00:04:56.200 CC examples/nvmf/nvmf/nvmf.o 00:04:56.200 CXX test/cpp_headers/nvme.o 00:04:56.200 CXX test/cpp_headers/nvme_intel.o 00:04:56.200 CXX test/cpp_headers/nvme_ocssd.o 00:04:56.200 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:56.200 CXX test/cpp_headers/nvme_spec.o 00:04:56.200 CXX test/cpp_headers/nvme_zns.o 00:04:56.200 CXX test/cpp_headers/nvmf_cmd.o 00:04:56.200 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:56.200 CXX test/cpp_headers/nvmf.o 00:04:56.200 CXX test/cpp_headers/nvmf_spec.o 00:04:56.200 CXX test/cpp_headers/nvmf_transport.o 00:04:56.457 CXX test/cpp_headers/opal.o 00:04:56.457 CXX test/cpp_headers/opal_spec.o 00:04:56.457 LINK nvmf 00:04:56.457 CXX test/cpp_headers/pci_ids.o 00:04:56.457 CXX test/cpp_headers/pipe.o 00:04:56.457 CXX test/cpp_headers/queue.o 00:04:56.457 CXX test/cpp_headers/reduce.o 00:04:56.457 CXX test/cpp_headers/rpc.o 00:04:56.457 CXX test/cpp_headers/scheduler.o 00:04:56.457 CXX test/cpp_headers/scsi.o 00:04:56.457 CXX test/cpp_headers/scsi_spec.o 00:04:56.457 CXX test/cpp_headers/sock.o 00:04:56.457 CXX test/cpp_headers/stdinc.o 00:04:56.458 CXX test/cpp_headers/string.o 00:04:56.458 CXX test/cpp_headers/thread.o 00:04:56.458 CXX test/cpp_headers/trace.o 00:04:56.716 CXX test/cpp_headers/trace_parser.o 00:04:56.716 CXX test/cpp_headers/tree.o 00:04:56.716 CXX test/cpp_headers/ublk.o 00:04:56.716 CXX test/cpp_headers/util.o 
00:04:56.716 CXX test/cpp_headers/uuid.o 00:04:56.716 CXX test/cpp_headers/version.o 00:04:56.716 CXX test/cpp_headers/vfio_user_pci.o 00:04:56.716 CXX test/cpp_headers/vfio_user_spec.o 00:04:56.716 CXX test/cpp_headers/vhost.o 00:04:56.716 CXX test/cpp_headers/vmd.o 00:04:56.716 CXX test/cpp_headers/zipf.o 00:04:56.716 CXX test/cpp_headers/xor.o 00:04:56.973 LINK cuse 00:04:58.882 LINK esnap 00:04:58.882 00:04:58.882 real 1m2.499s 00:04:58.882 user 5m5.718s 00:04:58.882 sys 0m53.620s 00:04:58.882 03:13:46 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:58.882 ************************************ 00:04:58.882 END TEST make 00:04:58.882 ************************************ 00:04:58.882 03:13:46 make -- common/autotest_common.sh@10 -- $ set +x 00:04:58.882 03:13:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:58.882 03:13:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:58.882 03:13:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:58.882 03:13:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.882 03:13:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:59.142 03:13:46 -- pm/common@44 -- $ pid=5798 00:04:59.142 03:13:46 -- pm/common@50 -- $ kill -TERM 5798 00:04:59.142 03:13:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.142 03:13:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:59.142 03:13:46 -- pm/common@44 -- $ pid=5800 00:04:59.142 03:13:46 -- pm/common@50 -- $ kill -TERM 5800 00:04:59.142 03:13:46 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:59.142 03:13:46 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:59.142 03:13:46 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.142 03:13:46 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.142 03:13:46 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.142 03:13:46 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.142 03:13:46 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.142 03:13:46 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.142 03:13:46 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.142 03:13:46 -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.142 03:13:46 -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.142 03:13:46 -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.142 03:13:46 -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.142 03:13:46 -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.142 03:13:46 -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.142 03:13:46 -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.142 03:13:46 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.142 03:13:46 -- scripts/common.sh@344 -- # case "$op" in 00:04:59.142 03:13:46 -- scripts/common.sh@345 -- # : 1 00:04:59.142 03:13:46 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.142 03:13:46 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.142 03:13:46 -- scripts/common.sh@365 -- # decimal 1 00:04:59.142 03:13:46 -- scripts/common.sh@353 -- # local d=1 00:04:59.142 03:13:46 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.142 03:13:46 -- scripts/common.sh@355 -- # echo 1 00:04:59.142 03:13:46 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.142 03:13:46 -- scripts/common.sh@366 -- # decimal 2 00:04:59.142 03:13:46 -- scripts/common.sh@353 -- # local d=2 00:04:59.142 03:13:46 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.142 03:13:46 -- scripts/common.sh@355 -- # echo 2 00:04:59.142 03:13:46 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.142 03:13:46 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.142 03:13:46 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.142 03:13:46 -- scripts/common.sh@368 -- # return 0 00:04:59.142 03:13:46 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.142 03:13:46 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.142 --rc genhtml_branch_coverage=1 00:04:59.142 --rc genhtml_function_coverage=1 00:04:59.142 --rc genhtml_legend=1 00:04:59.142 --rc geninfo_all_blocks=1 00:04:59.142 --rc geninfo_unexecuted_blocks=1 00:04:59.142 00:04:59.142 ' 00:04:59.142 03:13:46 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.142 --rc genhtml_branch_coverage=1 00:04:59.142 --rc genhtml_function_coverage=1 00:04:59.142 --rc genhtml_legend=1 00:04:59.142 --rc geninfo_all_blocks=1 00:04:59.142 --rc geninfo_unexecuted_blocks=1 00:04:59.142 00:04:59.142 ' 00:04:59.142 03:13:46 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.142 --rc genhtml_branch_coverage=1 00:04:59.142 --rc genhtml_function_coverage=1 00:04:59.142 --rc genhtml_legend=1 00:04:59.142 --rc geninfo_all_blocks=1 00:04:59.142 --rc geninfo_unexecuted_blocks=1 00:04:59.142 00:04:59.142 ' 00:04:59.142 03:13:46 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.142 --rc genhtml_branch_coverage=1 00:04:59.142 --rc genhtml_function_coverage=1 00:04:59.142 --rc genhtml_legend=1 00:04:59.142 --rc geninfo_all_blocks=1 00:04:59.142 --rc geninfo_unexecuted_blocks=1 00:04:59.142 00:04:59.142 ' 00:04:59.142 03:13:46 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:59.142 03:13:46 -- nvmf/common.sh@7 -- # uname -s 00:04:59.142 03:13:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.142 03:13:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.142 03:13:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.142 03:13:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.142 03:13:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.142 03:13:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.142 03:13:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.142 03:13:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.142 03:13:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.142 03:13:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.142 03:13:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:55306210-7915-454d-a0b9-4b0948f508be 00:04:59.142 
03:13:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=55306210-7915-454d-a0b9-4b0948f508be 00:04:59.142 03:13:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.142 03:13:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.142 03:13:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.142 03:13:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.142 03:13:46 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:59.142 03:13:46 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:59.142 03:13:46 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.142 03:13:46 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.142 03:13:46 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.142 03:13:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.142 03:13:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.143 03:13:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.143 03:13:46 -- paths/export.sh@5 -- # export PATH 00:04:59.143 03:13:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.143 03:13:46 -- nvmf/common.sh@51 -- # : 0 00:04:59.143 03:13:46 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:59.143 03:13:46 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:59.143 03:13:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.143 03:13:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.143 03:13:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.143 03:13:46 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:59.143 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:59.143 03:13:46 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:59.143 03:13:46 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:59.143 03:13:46 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:59.143 03:13:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:59.143 03:13:46 -- spdk/autotest.sh@32 -- # uname -s 00:04:59.143 03:13:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:59.143 03:13:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:59.143 03:13:46 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:59.143 03:13:46 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:59.143 03:13:46 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:59.143 03:13:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:59.405 03:13:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:59.405 03:13:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:59.405 03:13:46 -- spdk/autotest.sh@48 -- # udevadm_pid=68050 00:04:59.405 03:13:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:59.405 03:13:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:59.405 03:13:46 -- pm/common@17 -- # local monitor 00:04:59.405 03:13:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.405 03:13:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.405 03:13:46 -- pm/common@25 -- # sleep 1 00:04:59.405 03:13:46 -- pm/common@21 -- # date +%s 00:04:59.405 03:13:46 -- pm/common@21 -- # date +%s 00:04:59.405 03:13:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732158826 00:04:59.405 03:13:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732158826 00:04:59.405 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732158826_collect-vmstat.pm.log 00:04:59.405 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732158826_collect-cpu-load.pm.log 00:05:00.376 03:13:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:00.376 03:13:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:00.376 03:13:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.376 03:13:47 -- common/autotest_common.sh@10 -- # set +x 00:05:00.376 03:13:47 -- spdk/autotest.sh@59 -- # create_test_list 00:05:00.376 03:13:47 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:00.376 03:13:47 -- common/autotest_common.sh@10 -- # set +x 00:05:00.376 03:13:47 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:00.376 03:13:47 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:00.376 03:13:47 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:00.376 03:13:47 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:00.376 03:13:47 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:00.376 03:13:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:00.376 03:13:47 -- common/autotest_common.sh@1457 -- # uname 00:05:00.376 03:13:47 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:00.376 03:13:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:00.376 03:13:47 -- common/autotest_common.sh@1477 -- # uname 00:05:00.376 03:13:47 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:00.376 03:13:47 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:00.376 03:13:47 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:00.376 lcov: LCOV version 1.15 00:05:00.376 03:13:47 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:15.273 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:15.273 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:30.152 03:14:16 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:30.152 03:14:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.152 03:14:16 -- common/autotest_common.sh@10 -- # set +x 00:05:30.152 03:14:16 -- spdk/autotest.sh@78 -- # rm -f 00:05:30.152 03:14:16 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:30.152 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.152 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:30.411 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:30.411 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:30.411 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:30.411 03:14:17 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:30.411 03:14:17 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:30.411 03:14:17 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:30.411 03:14:17 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:30.411 03:14:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:30.411 03:14:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:30.411 03:14:17 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:30.411 03:14:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:30.411 03:14:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:30.411 03:14:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:30.411 03:14:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:30.411 03:14:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:30.411 03:14:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:30.411 03:14:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:30.411 03:14:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:30.411 03:14:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:30.411 03:14:17 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:30.411 03:14:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:30.411 03:14:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:30.411 03:14:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:30.411 03:14:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:05:30.411 03:14:17 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:30.411 03:14:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:30.411 03:14:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:30.411 03:14:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:30.411 03:14:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:05:30.411 03:14:17 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:30.411 03:14:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:30.411 03:14:17 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:30.411 03:14:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:30.411 03:14:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:05:30.411 03:14:17 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:30.411 03:14:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:30.411 03:14:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:30.411 03:14:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:30.411 03:14:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:30.411 03:14:17 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:30.411 03:14:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:30.411 03:14:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:30.411 03:14:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:30.411 03:14:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.411 03:14:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.411 03:14:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:30.411 03:14:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:30.411 03:14:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:30.411 No valid GPT data, bailing 00:05:30.411 03:14:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:30.411 03:14:17 -- scripts/common.sh@394 -- # pt= 00:05:30.411 03:14:17 -- scripts/common.sh@395 -- # return 1 00:05:30.411 03:14:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:30.411 1+0 records in 00:05:30.411 1+0 records out 00:05:30.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301637 s, 34.8 MB/s 00:05:30.411 03:14:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.411 03:14:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.411 03:14:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:30.411 03:14:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:30.411 03:14:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:30.411 No valid GPT data, bailing 00:05:30.411 03:14:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:30.411 03:14:17 -- scripts/common.sh@394 -- # pt= 00:05:30.411 03:14:17 -- scripts/common.sh@395 -- # return 1 00:05:30.411 03:14:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:30.411 1+0 records in 00:05:30.411 1+0 records out 00:05:30.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00447108 s, 235 MB/s 00:05:30.411 03:14:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.411 03:14:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.411 03:14:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:30.411 03:14:17 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:30.411 03:14:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:30.671 No valid GPT data, bailing 00:05:30.671 03:14:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:30.672 03:14:18 -- scripts/common.sh@394 -- # pt= 00:05:30.672 03:14:18 -- scripts/common.sh@395 -- # return 1 00:05:30.672 03:14:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:30.672 1+0 
records in 00:05:30.672 1+0 records out 00:05:30.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00483775 s, 217 MB/s 00:05:30.672 03:14:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.672 03:14:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.672 03:14:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:30.672 03:14:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:30.672 03:14:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:30.672 No valid GPT data, bailing 00:05:30.672 03:14:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:30.672 03:14:18 -- scripts/common.sh@394 -- # pt= 00:05:30.672 03:14:18 -- scripts/common.sh@395 -- # return 1 00:05:30.672 03:14:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:30.672 1+0 records in 00:05:30.672 1+0 records out 00:05:30.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00386416 s, 271 MB/s 00:05:30.672 03:14:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.672 03:14:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.672 03:14:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:30.672 03:14:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:30.672 03:14:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:30.672 No valid GPT data, bailing 00:05:30.672 03:14:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:30.672 03:14:18 -- scripts/common.sh@394 -- # pt= 00:05:30.672 03:14:18 -- scripts/common.sh@395 -- # return 1 00:05:30.672 03:14:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:30.672 1+0 records in 00:05:30.672 1+0 records out 00:05:30.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0058862 s, 178 MB/s 00:05:30.672 03:14:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.672 03:14:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.672 03:14:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:30.672 03:14:18 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:30.672 03:14:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:30.933 No valid GPT data, bailing 00:05:30.933 03:14:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:30.933 03:14:18 -- scripts/common.sh@394 -- # pt= 00:05:30.933 03:14:18 -- scripts/common.sh@395 -- # return 1 00:05:30.933 03:14:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:30.933 1+0 records in 00:05:30.933 1+0 records out 00:05:30.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472393 s, 222 MB/s 00:05:30.934 03:14:18 -- spdk/autotest.sh@105 -- # sync 00:05:30.934 03:14:18 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:30.934 03:14:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:30.934 03:14:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:32.847 03:14:20 -- spdk/autotest.sh@111 -- # uname -s 00:05:32.847 03:14:20 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:32.847 03:14:20 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:32.847 03:14:20 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:33.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.681 
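Each wipe above passes the same gate first: spdk-gpt.py looks for a valid GPT, blkid double-checks for any partition table, and only when both come back empty ("No valid GPT data, bailing" plus pt=) is the device treated as free and its first MiB zeroed. A sketch of that per-device step (the helper name and the exit-code handling are illustrative, not the exact scripts/common.sh logic):

    # Sketch of the block_in_use gate plus wipe traced above.
    wipe_if_free() {
        local block=$1 pt
        # A readable GPT means the device is in use; leave it untouched.
        /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$block" && return 0
        pt=$(blkid -s PTTYPE -o value "$block")
        [[ -n $pt ]] && return 0
        # Free device: zero the first MiB so stale metadata cannot leak into later tests.
        dd if=/dev/zero of="$block" bs=1M count=1
    }
    wipe_if_free /dev/nvme0n1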
Hugepages 00:05:33.681 node hugesize free / total 00:05:33.681 node0 1048576kB 0 / 0 00:05:33.681 node0 2048kB 0 / 0 00:05:33.681 00:05:33.681 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:33.942 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:33.942 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:33.942 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:33.942 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:34.204 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:34.204 03:14:21 -- spdk/autotest.sh@117 -- # uname -s 00:05:34.204 03:14:21 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:34.204 03:14:21 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:34.204 03:14:21 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:34.466 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.038 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.038 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.038 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.038 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.038 03:14:22 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:36.414 03:14:23 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:36.414 03:14:23 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:36.414 03:14:23 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:36.414 03:14:23 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:36.414 03:14:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:36.414 03:14:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:36.414 03:14:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:36.414 03:14:23 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:36.414 03:14:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:36.414 03:14:23 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:36.414 03:14:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:36.414 03:14:23 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:36.414 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.673 Waiting for block devices as requested 00:05:36.673 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:36.673 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:36.673 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:36.931 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:42.218 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:42.218 03:14:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:42.218 03:14:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:42.218 03:14:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:42.218 03:14:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.218 03:14:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:42.218 03:14:29 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:42.218 03:14:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:42.218 03:14:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:42.218 03:14:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:42.218 03:14:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:42.218 03:14:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:42.218 03:14:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:42.218 03:14:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:42.218 03:14:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:42.218 03:14:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:42.218 03:14:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:42.218 03:14:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:42.218 03:14:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:42.218 03:14:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:42.219 03:14:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:42.219 03:14:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:42.219 03:14:29 -- common/autotest_common.sh@1543 -- # continue 00:05:42.219 03:14:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:42.219 03:14:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:42.219 03:14:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:42.219 03:14:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.219 03:14:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:42.219 03:14:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:42.219 03:14:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:42.219 03:14:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:42.219 03:14:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:42.219 03:14:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:42.219 03:14:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:42.219 03:14:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:42.219 03:14:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:42.219 03:14:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:42.219 03:14:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:42.219 03:14:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:42.219 03:14:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:42.219 03:14:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:42.219 03:14:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:42.219 03:14:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:42.219 03:14:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:42.219 03:14:29 -- common/autotest_common.sh@1543 -- # continue 00:05:42.219 03:14:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:42.219 03:14:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:42.219 03:14:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 
00:05:42.219 03:14:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.219 03:14:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:42.219 03:14:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:42.219 03:14:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:42.219 03:14:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:42.219 03:14:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:42.219 03:14:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:42.219 03:14:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:42.219 03:14:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:42.219 03:14:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:42.219 03:14:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:42.219 03:14:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:42.219 03:14:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:42.219 03:14:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:42.219 03:14:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:42.219 03:14:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:42.219 03:14:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:42.219 03:14:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:42.219 03:14:29 -- common/autotest_common.sh@1543 -- # continue 00:05:42.219 03:14:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:42.219 03:14:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:42.219 03:14:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:42.219 03:14:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.219 03:14:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:42.219 03:14:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:42.219 03:14:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:42.219 03:14:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:42.219 03:14:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:42.219 03:14:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:42.219 03:14:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:42.219 03:14:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:42.219 03:14:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:42.219 03:14:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:42.219 03:14:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:42.219 03:14:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:42.219 03:14:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:42.219 03:14:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:42.219 03:14:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:42.219 03:14:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:42.219 03:14:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
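The loop above resolves each PCI address to its character device through the /sys/class/nvme symlinks, then reads two identify-controller fields: OACS, whose bit 3 (0x8) advertises Namespace Management (hence oacs_ns_manage=8 out of oacs=0x12a), and unvmcap, where 0 means no unallocated capacity and therefore nothing to revert. Condensed into a sketch (names follow the trace; the control flow is illustrative):

    # Map a bdf to /dev/nvmeX via sysfs, then gate on nvme id-ctrl fields.
    get_nvme_ctrlr_from_bdf() {
        local bdf=$1 path
        path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
        echo "/dev/$(basename "$path")"
    }
    ctrlr=$(get_nvme_ctrlr_from_bdf 0000:00:10.0)
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    if (( oacs & 0x8 )); then   # Namespace Management supported
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo "namespace map intact, skipping revert"
    fi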
00:05:42.219 03:14:29 -- common/autotest_common.sh@1543 -- # continue 00:05:42.219 03:14:29 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:42.219 03:14:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.219 03:14:29 -- common/autotest_common.sh@10 -- # set +x 00:05:42.219 03:14:29 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:42.219 03:14:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.219 03:14:29 -- common/autotest_common.sh@10 -- # set +x 00:05:42.219 03:14:29 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:42.477 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.044 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.044 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.044 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.045 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.306 03:14:30 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:43.306 03:14:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:43.306 03:14:30 -- common/autotest_common.sh@10 -- # set +x 00:05:43.306 03:14:30 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:43.306 03:14:30 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:43.306 03:14:30 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:43.306 03:14:30 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:43.306 03:14:30 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:43.306 03:14:30 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:43.306 03:14:30 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:43.306 03:14:30 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:43.306 03:14:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:43.306 03:14:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:43.306 03:14:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:43.306 03:14:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:43.306 03:14:30 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:43.306 03:14:30 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:43.306 03:14:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:43.306 03:14:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:43.306 03:14:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:43.306 03:14:30 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:43.306 03:14:30 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.306 03:14:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:43.306 03:14:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:43.306 03:14:30 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:43.306 03:14:30 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.306 03:14:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:43.306 03:14:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:43.306 03:14:30 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:43.306 03:14:30 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:05:43.306 03:14:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:43.306 03:14:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:43.306 03:14:30 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:43.306 03:14:30 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.306 03:14:30 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:43.306 03:14:30 -- common/autotest_common.sh@1572 -- # return 0 00:05:43.306 03:14:30 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:43.306 03:14:30 -- common/autotest_common.sh@1580 -- # return 0 00:05:43.306 03:14:30 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:43.306 03:14:30 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:43.306 03:14:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:43.306 03:14:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:43.306 03:14:30 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:43.306 03:14:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.306 03:14:30 -- common/autotest_common.sh@10 -- # set +x 00:05:43.306 03:14:30 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:43.306 03:14:30 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:43.306 03:14:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.306 03:14:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.306 03:14:30 -- common/autotest_common.sh@10 -- # set +x 00:05:43.306 ************************************ 00:05:43.306 START TEST env 00:05:43.306 ************************************ 00:05:43.306 03:14:30 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:43.306 * Looking for test storage... 00:05:43.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:43.306 03:14:30 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.306 03:14:30 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.306 03:14:30 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.568 03:14:30 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.568 03:14:30 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.568 03:14:30 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.568 03:14:30 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.568 03:14:30 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.568 03:14:30 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.568 03:14:30 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.568 03:14:30 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.568 03:14:30 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.568 03:14:30 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.568 03:14:30 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.568 03:14:30 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.568 03:14:30 env -- scripts/common.sh@344 -- # case "$op" in 00:05:43.568 03:14:30 env -- scripts/common.sh@345 -- # : 1 00:05:43.568 03:14:30 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.568 03:14:30 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.568 03:14:30 env -- scripts/common.sh@365 -- # decimal 1 00:05:43.568 03:14:30 env -- scripts/common.sh@353 -- # local d=1 00:05:43.568 03:14:30 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.568 03:14:30 env -- scripts/common.sh@355 -- # echo 1 00:05:43.568 03:14:30 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.568 03:14:30 env -- scripts/common.sh@366 -- # decimal 2 00:05:43.568 03:14:30 env -- scripts/common.sh@353 -- # local d=2 00:05:43.568 03:14:30 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.568 03:14:30 env -- scripts/common.sh@355 -- # echo 2 00:05:43.568 03:14:30 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.568 03:14:30 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.568 03:14:30 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.568 03:14:30 env -- scripts/common.sh@368 -- # return 0 00:05:43.568 03:14:30 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.568 03:14:30 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.568 --rc genhtml_branch_coverage=1 00:05:43.568 --rc genhtml_function_coverage=1 00:05:43.568 --rc genhtml_legend=1 00:05:43.568 --rc geninfo_all_blocks=1 00:05:43.568 --rc geninfo_unexecuted_blocks=1 00:05:43.568 00:05:43.568 ' 00:05:43.568 03:14:30 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.568 --rc genhtml_branch_coverage=1 00:05:43.568 --rc genhtml_function_coverage=1 00:05:43.568 --rc genhtml_legend=1 00:05:43.568 --rc geninfo_all_blocks=1 00:05:43.568 --rc geninfo_unexecuted_blocks=1 00:05:43.568 00:05:43.568 ' 00:05:43.568 03:14:30 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.568 --rc genhtml_branch_coverage=1 00:05:43.568 --rc genhtml_function_coverage=1 00:05:43.568 --rc genhtml_legend=1 00:05:43.568 --rc geninfo_all_blocks=1 00:05:43.568 --rc geninfo_unexecuted_blocks=1 00:05:43.568 00:05:43.568 ' 00:05:43.568 03:14:30 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.568 --rc genhtml_branch_coverage=1 00:05:43.568 --rc genhtml_function_coverage=1 00:05:43.568 --rc genhtml_legend=1 00:05:43.568 --rc geninfo_all_blocks=1 00:05:43.568 --rc geninfo_unexecuted_blocks=1 00:05:43.568 00:05:43.568 ' 00:05:43.568 03:14:30 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:43.568 03:14:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.568 03:14:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.568 03:14:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.568 ************************************ 00:05:43.568 START TEST env_memory 00:05:43.568 ************************************ 00:05:43.568 03:14:30 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:43.568 00:05:43.568 00:05:43.568 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.568 http://cunit.sourceforge.net/ 00:05:43.568 00:05:43.568 00:05:43.568 Suite: memory 00:05:43.568 Test: alloc and free memory map ...[2024-11-21 03:14:31.001600] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:43.568 passed 00:05:43.568 Test: mem map translation ...[2024-11-21 03:14:31.040345] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:43.568 [2024-11-21 03:14:31.040394] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:43.568 [2024-11-21 03:14:31.040451] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:43.568 [2024-11-21 03:14:31.040466] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:43.568 passed 00:05:43.568 Test: mem map registration ...[2024-11-21 03:14:31.108569] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:43.568 [2024-11-21 03:14:31.108620] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:43.831 passed 00:05:43.831 Test: mem map adjacent registrations ...passed 00:05:43.831 00:05:43.831 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.831 suites 1 1 n/a 0 0 00:05:43.831 tests 4 4 4 0 0 00:05:43.831 asserts 152 152 152 0 n/a 00:05:43.831 00:05:43.831 Elapsed time = 0.233 seconds 00:05:43.831 00:05:43.831 real 0m0.273s 00:05:43.831 user 0m0.245s 00:05:43.831 sys 0m0.018s 00:05:43.831 03:14:31 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.831 03:14:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:43.831 ************************************ 00:05:43.831 END TEST env_memory 00:05:43.831 ************************************ 00:05:43.831 03:14:31 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:43.831 03:14:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.831 03:14:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.831 03:14:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.831 ************************************ 00:05:43.831 START TEST env_vtophys 00:05:43.831 ************************************ 00:05:43.831 03:14:31 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:43.831 EAL: lib.eal log level changed from notice to debug 00:05:43.831 EAL: Detected lcore 0 as core 0 on socket 0 00:05:43.831 EAL: Detected lcore 1 as core 0 on socket 0 00:05:43.831 EAL: Detected lcore 2 as core 0 on socket 0 00:05:43.831 EAL: Detected lcore 3 as core 0 on socket 0 00:05:43.831 EAL: Detected lcore 4 as core 0 on socket 0 00:05:43.831 EAL: Detected lcore 5 as core 0 on socket 0 00:05:43.831 EAL: Detected lcore 6 as core 0 on socket 0 00:05:43.831 EAL: Detected lcore 7 as core 0 on socket 0 00:05:43.831 EAL: Detected lcore 8 as core 0 on socket 0 00:05:43.831 EAL: Detected lcore 9 as core 0 on socket 0 00:05:43.831 EAL: Maximum logical cores by configuration: 128 00:05:43.831 EAL: Detected CPU lcores: 10 00:05:43.831 EAL: Detected NUMA nodes: 1 00:05:43.831 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:05:43.831 EAL: Detected shared linkage of DPDK 00:05:43.831 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:05:43.831 EAL: Registered [vdev] bus. 00:05:43.831 EAL: bus.vdev log level changed from disabled to notice 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:05:43.831 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:43.831 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25.0 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25.0 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:05:43.831 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:05:43.831 EAL: No shared files mode enabled, IPC will be disabled 00:05:43.831 EAL: No shared files mode enabled, IPC is disabled 00:05:43.831 EAL: Selected IOVA mode 'PA' 00:05:43.831 EAL: Probing VFIO support... 00:05:43.831 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:43.831 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:43.831 EAL: Ask a virtual area of 0x2e000 bytes 00:05:43.831 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:43.831 EAL: Setting up physically contiguous memory... 
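With /sys/module/vfio absent, EAL skips VFIO and settles on IOVA mode 'PA', which lines up with setup.sh binding the controllers to uio_pci_generic rather than vfio-pci earlier in the log. The probe itself is just a module check:

    # The VFIO probe above, reduced to its sysfs test.
    [[ -e /sys/module/vfio ]] || echo "vfio not loaded: EAL falls back to IOVA mode PA"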
00:05:43.831 EAL: Setting maximum number of open files to 524288 00:05:43.831 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:43.831 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:43.831 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.831 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:43.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.831 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.831 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:43.831 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:43.831 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.831 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:43.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.831 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.831 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:43.831 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:43.831 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.831 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:43.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.831 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.831 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:43.831 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:43.832 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.832 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:43.832 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.832 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.832 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:43.832 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:43.832 EAL: Hugepages will be freed exactly as allocated. 00:05:43.832 EAL: No shared files mode enabled, IPC is disabled 00:05:43.832 EAL: No shared files mode enabled, IPC is disabled 00:05:44.092 EAL: TSC frequency is ~2600000 KHz 00:05:44.092 EAL: Main lcore 0 is ready (tid=7fd88b316a40;cpuset=[0]) 00:05:44.092 EAL: Trying to obtain current memory policy. 00:05:44.092 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.092 EAL: Restoring previous memory policy: 0 00:05:44.092 EAL: request: mp_malloc_sync 00:05:44.092 EAL: No shared files mode enabled, IPC is disabled 00:05:44.092 EAL: Heap on socket 0 was expanded by 2MB 00:05:44.092 EAL: Allocated 2112 bytes of per-lcore data with a 64-byte alignment 00:05:44.092 EAL: No shared files mode enabled, IPC is disabled 00:05:44.092 EAL: Mem event callback 'spdk:(nil)' registered 00:05:44.092 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:44.092 00:05:44.092 00:05:44.092 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.092 http://cunit.sourceforge.net/ 00:05:44.092 00:05:44.092 00:05:44.092 Suite: components_suite 00:05:44.351 Test: vtophys_malloc_test ...passed 00:05:44.351 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
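The virtual-area arithmetic above is self-consistent: each of the four memseg lists is sized for 8192 segments of the detected 2 MiB hugepages, so every list needs 0x400000000 bytes (16 GiB) of address space, exactly what each "Ask a virtual area of 0x400000000 bytes" request reserves. A one-liner to confirm:

    # 8192 segments x 2 MiB pages per memseg list:
    printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # prints 0x400000000 (16 GiB)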
00:05:44.351 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.351 EAL: Restoring previous memory policy: 4 00:05:44.351 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.351 EAL: request: mp_malloc_sync 00:05:44.351 EAL: No shared files mode enabled, IPC is disabled 00:05:44.351 EAL: Heap on socket 0 was expanded by 4MB 00:05:44.351 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.351 EAL: request: mp_malloc_sync 00:05:44.351 EAL: No shared files mode enabled, IPC is disabled 00:05:44.351 EAL: Heap on socket 0 was shrunk by 4MB 00:05:44.351 EAL: Trying to obtain current memory policy. 00:05:44.351 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.351 EAL: Restoring previous memory policy: 4 00:05:44.351 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.351 EAL: request: mp_malloc_sync 00:05:44.351 EAL: No shared files mode enabled, IPC is disabled 00:05:44.351 EAL: Heap on socket 0 was expanded by 6MB 00:05:44.351 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.351 EAL: request: mp_malloc_sync 00:05:44.351 EAL: No shared files mode enabled, IPC is disabled 00:05:44.351 EAL: Heap on socket 0 was shrunk by 6MB 00:05:44.351 EAL: Trying to obtain current memory policy. 00:05:44.351 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.351 EAL: Restoring previous memory policy: 4 00:05:44.351 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.351 EAL: request: mp_malloc_sync 00:05:44.351 EAL: No shared files mode enabled, IPC is disabled 00:05:44.351 EAL: Heap on socket 0 was expanded by 10MB 00:05:44.351 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.351 EAL: request: mp_malloc_sync 00:05:44.351 EAL: No shared files mode enabled, IPC is disabled 00:05:44.351 EAL: Heap on socket 0 was shrunk by 10MB 00:05:44.351 EAL: Trying to obtain current memory policy. 00:05:44.351 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.352 EAL: Restoring previous memory policy: 4 00:05:44.352 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.352 EAL: request: mp_malloc_sync 00:05:44.352 EAL: No shared files mode enabled, IPC is disabled 00:05:44.352 EAL: Heap on socket 0 was expanded by 18MB 00:05:44.352 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.352 EAL: request: mp_malloc_sync 00:05:44.352 EAL: No shared files mode enabled, IPC is disabled 00:05:44.352 EAL: Heap on socket 0 was shrunk by 18MB 00:05:44.352 EAL: Trying to obtain current memory policy. 00:05:44.352 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.352 EAL: Restoring previous memory policy: 4 00:05:44.352 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.352 EAL: request: mp_malloc_sync 00:05:44.352 EAL: No shared files mode enabled, IPC is disabled 00:05:44.352 EAL: Heap on socket 0 was expanded by 34MB 00:05:44.352 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.352 EAL: request: mp_malloc_sync 00:05:44.352 EAL: No shared files mode enabled, IPC is disabled 00:05:44.352 EAL: Heap on socket 0 was shrunk by 34MB 00:05:44.352 EAL: Trying to obtain current memory policy. 
00:05:44.352 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.352 EAL: Restoring previous memory policy: 4 00:05:44.352 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.352 EAL: request: mp_malloc_sync 00:05:44.352 EAL: No shared files mode enabled, IPC is disabled 00:05:44.352 EAL: Heap on socket 0 was expanded by 66MB 00:05:44.352 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.352 EAL: request: mp_malloc_sync 00:05:44.352 EAL: No shared files mode enabled, IPC is disabled 00:05:44.352 EAL: Heap on socket 0 was shrunk by 66MB 00:05:44.352 EAL: Trying to obtain current memory policy. 00:05:44.352 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.352 EAL: Restoring previous memory policy: 4 00:05:44.352 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.352 EAL: request: mp_malloc_sync 00:05:44.352 EAL: No shared files mode enabled, IPC is disabled 00:05:44.352 EAL: Heap on socket 0 was expanded by 130MB 00:05:44.352 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.352 EAL: request: mp_malloc_sync 00:05:44.352 EAL: No shared files mode enabled, IPC is disabled 00:05:44.352 EAL: Heap on socket 0 was shrunk by 130MB 00:05:44.352 EAL: Trying to obtain current memory policy. 00:05:44.352 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.612 EAL: Restoring previous memory policy: 4 00:05:44.612 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.612 EAL: request: mp_malloc_sync 00:05:44.612 EAL: No shared files mode enabled, IPC is disabled 00:05:44.612 EAL: Heap on socket 0 was expanded by 258MB 00:05:44.612 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.612 EAL: request: mp_malloc_sync 00:05:44.612 EAL: No shared files mode enabled, IPC is disabled 00:05:44.612 EAL: Heap on socket 0 was shrunk by 258MB 00:05:44.612 EAL: Trying to obtain current memory policy. 00:05:44.612 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.612 EAL: Restoring previous memory policy: 4 00:05:44.612 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.612 EAL: request: mp_malloc_sync 00:05:44.612 EAL: No shared files mode enabled, IPC is disabled 00:05:44.612 EAL: Heap on socket 0 was expanded by 514MB 00:05:44.874 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.874 EAL: request: mp_malloc_sync 00:05:44.874 EAL: No shared files mode enabled, IPC is disabled 00:05:44.874 EAL: Heap on socket 0 was shrunk by 514MB 00:05:44.874 EAL: Trying to obtain current memory policy. 
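Each vtophys step above is symmetric: an spdk malloc produces an "expanded by N MB" mem event and the matching free a "shrunk by N MB" one, so summing both streams from a saved log is a quick leak check. A sketch (the log filename is hypothetical, and the awk relies on the "NMB" suffix coercing to a number):

    # Tally heap grow/shrink events from a captured run.
    awk '/Heap on socket 0 was expanded by/ { up += $NF }
         /Heap on socket 0 was shrunk by/   { down += $NF }
         END { printf "expanded %dMB, shrunk %dMB\n", up, down }' vtophys.log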
00:05:44.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.135 EAL: Restoring previous memory policy: 4 00:05:45.135 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.135 EAL: request: mp_malloc_sync 00:05:45.135 EAL: No shared files mode enabled, IPC is disabled 00:05:45.135 EAL: Heap on socket 0 was expanded by 1026MB 00:05:45.425 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.425 passed 00:05:45.425 00:05:45.425 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.425 suites 1 1 n/a 0 0 00:05:45.425 tests 2 2 2 0 0 00:05:45.425 asserts 5267 5267 5267 0 n/a 00:05:45.425 00:05:45.425 Elapsed time = 1.382 seconds 00:05:45.425 EAL: request: mp_malloc_sync 00:05:45.425 EAL: No shared files mode enabled, IPC is disabled 00:05:45.425 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:45.425 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.425 EAL: request: mp_malloc_sync 00:05:45.425 EAL: No shared files mode enabled, IPC is disabled 00:05:45.425 EAL: Heap on socket 0 was shrunk by 2MB 00:05:45.425 EAL: No shared files mode enabled, IPC is disabled 00:05:45.425 EAL: No shared files mode enabled, IPC is disabled 00:05:45.425 EAL: No shared files mode enabled, IPC is disabled 00:05:45.425 00:05:45.425 real 0m1.642s 00:05:45.425 user 0m0.697s 00:05:45.425 sys 0m0.804s 00:05:45.425 03:14:32 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.425 03:14:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:45.425 ************************************ 00:05:45.425 END TEST env_vtophys 00:05:45.425 ************************************ 00:05:45.425 03:14:32 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:45.425 03:14:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.425 03:14:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.425 03:14:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.690 ************************************ 00:05:45.690 START TEST env_pci 00:05:45.690 ************************************ 00:05:45.690 03:14:32 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:45.690 00:05:45.690 00:05:45.690 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.690 http://cunit.sourceforge.net/ 00:05:45.690 00:05:45.690 00:05:45.690 Suite: pci 00:05:45.690 Test: pci_hook ...[2024-11-21 03:14:33.015948] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70790 has claimed it 00:05:45.690 passed 00:05:45.690 00:05:45.690 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.690 suites 1 1 n/a 0 0 00:05:45.690 tests 1 1 1 0 0 00:05:45.690 asserts 25 25 25 0 n/a 00:05:45.690 00:05:45.690 Elapsed time = 0.006 seconds 00:05:45.690 EAL: Cannot find device (10000:00:01.0) 00:05:45.690 EAL: Failed to attach device on primary process 00:05:45.690 00:05:45.690 real 0m0.072s 00:05:45.690 user 0m0.034s 00:05:45.690 sys 0m0.038s 00:05:45.690 03:14:33 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.690 ************************************ 00:05:45.690 END TEST env_pci 00:05:45.690 ************************************ 00:05:45.690 03:14:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:45.690 03:14:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:45.690 03:14:33 env -- env/env.sh@15 -- # uname 00:05:45.690 03:14:33 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:45.690 03:14:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:45.690 03:14:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:45.690 03:14:33 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:45.690 03:14:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.690 03:14:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.690 ************************************ 00:05:45.690 START TEST env_dpdk_post_init 00:05:45.691 ************************************ 00:05:45.691 03:14:33 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:45.691 EAL: Detected CPU lcores: 10 00:05:45.691 EAL: Detected NUMA nodes: 1 00:05:45.691 EAL: Detected shared linkage of DPDK 00:05:45.691 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:45.691 EAL: Selected IOVA mode 'PA' 00:05:45.951 Starting DPDK initialization... 00:05:45.951 Starting SPDK post initialization... 00:05:45.951 SPDK NVMe probe 00:05:45.951 Attaching to 0000:00:10.0 00:05:45.951 Attaching to 0000:00:11.0 00:05:45.951 Attaching to 0000:00:12.0 00:05:45.951 Attaching to 0000:00:13.0 00:05:45.951 Attached to 0000:00:13.0 00:05:45.951 Attached to 0000:00:10.0 00:05:45.951 Attached to 0000:00:11.0 00:05:45.951 Attached to 0000:00:12.0 00:05:45.951 Cleaning up... 00:05:45.951 00:05:45.951 real 0m0.247s 00:05:45.952 user 0m0.067s 00:05:45.952 sys 0m0.080s 00:05:45.952 ************************************ 00:05:45.952 END TEST env_dpdk_post_init 00:05:45.952 ************************************ 00:05:45.952 03:14:33 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.952 03:14:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.952 03:14:33 env -- env/env.sh@26 -- # uname 00:05:45.952 03:14:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:45.952 03:14:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:45.952 03:14:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.952 03:14:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.952 03:14:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.952 ************************************ 00:05:45.952 START TEST env_mem_callbacks 00:05:45.952 ************************************ 00:05:45.952 03:14:33 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:45.952 EAL: Detected CPU lcores: 10 00:05:45.952 EAL: Detected NUMA nodes: 1 00:05:45.952 EAL: Detected shared linkage of DPDK 00:05:45.952 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:45.952 EAL: Selected IOVA mode 'PA' 00:05:46.211 00:05:46.211 00:05:46.211 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.211 http://cunit.sourceforge.net/ 00:05:46.211 00:05:46.211 00:05:46.211 Suite: memory 00:05:46.211 Test: test ... 
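In the trace that follows, byte-sized mallocs reach SPDK as hugepage-granular register callbacks: malloc 3145728 (3 MiB) shows up as register ... 4194304 (4 MiB), because DPDK grows its heap in whole 2 MiB pages. A quick invariant check over such lines (assumes the bare "register <addr> <len>" form, with timestamps stripped):

    # Every (un)registered length should be a multiple of the 2 MiB hugepage.
    awk '$1 ~ /^(un)?register$/ && $3 % 2097152 { print "odd length:", $0 }' mem_callbacks.log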
00:05:46.211 register 0x200000200000 2097152 00:05:46.211 malloc 3145728 00:05:46.211 register 0x200000400000 4194304 00:05:46.211 buf 0x200000500000 len 3145728 PASSED 00:05:46.211 malloc 64 00:05:46.211 buf 0x2000004fff40 len 64 PASSED 00:05:46.211 malloc 4194304 00:05:46.211 register 0x200000800000 6291456 00:05:46.211 buf 0x200000a00000 len 4194304 PASSED 00:05:46.211 free 0x200000500000 3145728 00:05:46.211 free 0x2000004fff40 64 00:05:46.211 unregister 0x200000400000 4194304 PASSED 00:05:46.211 free 0x200000a00000 4194304 00:05:46.211 unregister 0x200000800000 6291456 PASSED 00:05:46.211 malloc 8388608 00:05:46.211 register 0x200000400000 10485760 00:05:46.211 buf 0x200000600000 len 8388608 PASSED 00:05:46.211 free 0x200000600000 8388608 00:05:46.211 unregister 0x200000400000 10485760 PASSED 00:05:46.211 passed 00:05:46.211 00:05:46.211 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.211 suites 1 1 n/a 0 0 00:05:46.211 tests 1 1 1 0 0 00:05:46.211 asserts 15 15 15 0 n/a 00:05:46.211 00:05:46.211 Elapsed time = 0.010 seconds 00:05:46.211 00:05:46.211 real 0m0.186s 00:05:46.211 user 0m0.027s 00:05:46.211 sys 0m0.058s 00:05:46.211 03:14:33 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.211 ************************************ 00:05:46.211 END TEST env_mem_callbacks 00:05:46.211 ************************************ 00:05:46.211 03:14:33 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:46.211 00:05:46.211 real 0m2.877s 00:05:46.211 user 0m1.232s 00:05:46.211 sys 0m1.213s 00:05:46.211 03:14:33 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.211 ************************************ 00:05:46.211 END TEST env 00:05:46.211 ************************************ 00:05:46.211 03:14:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.211 03:14:33 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:46.211 03:14:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.211 03:14:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.211 03:14:33 -- common/autotest_common.sh@10 -- # set +x 00:05:46.211 ************************************ 00:05:46.211 START TEST rpc 00:05:46.211 ************************************ 00:05:46.211 03:14:33 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:46.471 * Looking for test storage... 
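As in the env suite earlier, run_test opens by probing the installed lcov and comparing its version against 1.15 field by field (the "lt 1.15 2" trace repeated below). The comparison, compacted into a sketch of scripts/common.sh's cmp_versions (not a verbatim copy):

    # Field-wise "less than" over dotted version strings, as traced below.
    version_lt() {
        local IFS='.-'
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly smaller field
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # all fields equal
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2"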
00:05:46.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:46.471 03:14:33 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.471 03:14:33 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.471 03:14:33 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.471 03:14:33 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.471 03:14:33 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.471 03:14:33 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.471 03:14:33 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.471 03:14:33 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.471 03:14:33 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.471 03:14:33 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.471 03:14:33 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.471 03:14:33 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.471 03:14:33 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.471 03:14:33 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.471 03:14:33 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.471 03:14:33 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:46.471 03:14:33 rpc -- scripts/common.sh@345 -- # : 1 00:05:46.471 03:14:33 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.471 03:14:33 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.471 03:14:33 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:46.471 03:14:33 rpc -- scripts/common.sh@353 -- # local d=1 00:05:46.471 03:14:33 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.471 03:14:33 rpc -- scripts/common.sh@355 -- # echo 1 00:05:46.471 03:14:33 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.471 03:14:33 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:46.471 03:14:33 rpc -- scripts/common.sh@353 -- # local d=2 00:05:46.471 03:14:33 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.471 03:14:33 rpc -- scripts/common.sh@355 -- # echo 2 00:05:46.471 03:14:33 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.471 03:14:33 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.471 03:14:33 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.471 03:14:33 rpc -- scripts/common.sh@368 -- # return 0 00:05:46.471 03:14:33 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.471 03:14:33 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.471 --rc genhtml_branch_coverage=1 00:05:46.471 --rc genhtml_function_coverage=1 00:05:46.471 --rc genhtml_legend=1 00:05:46.471 --rc geninfo_all_blocks=1 00:05:46.471 --rc geninfo_unexecuted_blocks=1 00:05:46.471 00:05:46.471 ' 00:05:46.471 03:14:33 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.471 --rc genhtml_branch_coverage=1 00:05:46.471 --rc genhtml_function_coverage=1 00:05:46.471 --rc genhtml_legend=1 00:05:46.471 --rc geninfo_all_blocks=1 00:05:46.471 --rc geninfo_unexecuted_blocks=1 00:05:46.471 00:05:46.471 ' 00:05:46.471 03:14:33 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.471 --rc genhtml_branch_coverage=1 00:05:46.471 --rc genhtml_function_coverage=1 00:05:46.471 --rc 
genhtml_legend=1 00:05:46.471 --rc geninfo_all_blocks=1 00:05:46.471 --rc geninfo_unexecuted_blocks=1 00:05:46.471 00:05:46.471 ' 00:05:46.471 03:14:33 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.471 --rc genhtml_branch_coverage=1 00:05:46.471 --rc genhtml_function_coverage=1 00:05:46.471 --rc genhtml_legend=1 00:05:46.471 --rc geninfo_all_blocks=1 00:05:46.471 --rc geninfo_unexecuted_blocks=1 00:05:46.471 00:05:46.471 ' 00:05:46.471 03:14:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70917 00:05:46.471 03:14:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.471 03:14:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70917 00:05:46.471 03:14:33 rpc -- common/autotest_common.sh@835 -- # '[' -z 70917 ']' 00:05:46.471 03:14:33 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.471 03:14:33 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.471 03:14:33 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.471 03:14:33 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.471 03:14:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.471 03:14:33 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:46.471 [2024-11-21 03:14:33.962495] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:05:46.471 [2024-11-21 03:14:33.962620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70917 ] 00:05:46.732 [2024-11-21 03:14:34.096021] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:46.732 [2024-11-21 03:14:34.121720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.732 [2024-11-21 03:14:34.144831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:46.732 [2024-11-21 03:14:34.144886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70917' to capture a snapshot of events at runtime. 00:05:46.732 [2024-11-21 03:14:34.144914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:46.732 [2024-11-21 03:14:34.144926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:46.732 [2024-11-21 03:14:34.144934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70917 for offline analysis/debug. 
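The startup above follows the usual rpc.sh pattern: spdk_tgt is launched in the background (pid 70917 here) and waitforlisten blocks until the daemon answers on /var/tmp/spdk.sock. A hedged sketch of that gate (the real helper lives in autotest_common.sh; the polling loop below is illustrative):

    # Start the target and wait for its RPC socket to answer.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    for (( i = 0; i < 100; i++ )); do
        # rpc.py succeeds once the app is serving /var/tmp/spdk.sock.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
    kill -0 "$spdk_pid" && echo "spdk_tgt up, pid $spdk_pid"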
00:05:46.732 [2024-11-21 03:14:34.145317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.306 03:14:34 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.306 03:14:34 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:47.306 03:14:34 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.306 03:14:34 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.306 03:14:34 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:47.306 03:14:34 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:47.306 03:14:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.306 03:14:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.306 03:14:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.306 ************************************ 00:05:47.306 START TEST rpc_integrity 00:05:47.306 ************************************ 00:05:47.306 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:47.306 03:14:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:47.306 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.306 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.306 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.306 03:14:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:47.306 03:14:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:47.306 03:14:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:47.306 03:14:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:47.306 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.306 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.567 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.567 03:14:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:47.567 03:14:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:47.567 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.567 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.567 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.567 03:14:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:47.567 { 00:05:47.567 "name": "Malloc0", 00:05:47.567 "aliases": [ 00:05:47.567 "a343e194-b4d5-4769-840b-93e18c871e9f" 00:05:47.567 ], 00:05:47.567 "product_name": "Malloc disk", 00:05:47.567 "block_size": 512, 00:05:47.567 "num_blocks": 16384, 00:05:47.567 "uuid": "a343e194-b4d5-4769-840b-93e18c871e9f", 00:05:47.567 "assigned_rate_limits": { 00:05:47.567 "rw_ios_per_sec": 0, 00:05:47.567 "rw_mbytes_per_sec": 0, 00:05:47.567 "r_mbytes_per_sec": 0, 00:05:47.567 "w_mbytes_per_sec": 0 00:05:47.567 }, 00:05:47.567 "claimed": false, 00:05:47.567 "zoned": false, 00:05:47.567 "supported_io_types": { 00:05:47.567 "read": true, 00:05:47.567 "write": true, 00:05:47.567 "unmap": true, 00:05:47.567 "flush": true, 
00:05:47.567 "reset": true, 00:05:47.567 "nvme_admin": false, 00:05:47.567 "nvme_io": false, 00:05:47.567 "nvme_io_md": false, 00:05:47.567 "write_zeroes": true, 00:05:47.567 "zcopy": true, 00:05:47.567 "get_zone_info": false, 00:05:47.567 "zone_management": false, 00:05:47.567 "zone_append": false, 00:05:47.567 "compare": false, 00:05:47.567 "compare_and_write": false, 00:05:47.567 "abort": true, 00:05:47.567 "seek_hole": false, 00:05:47.567 "seek_data": false, 00:05:47.567 "copy": true, 00:05:47.567 "nvme_iov_md": false 00:05:47.567 }, 00:05:47.567 "memory_domains": [ 00:05:47.567 { 00:05:47.567 "dma_device_id": "system", 00:05:47.567 "dma_device_type": 1 00:05:47.567 }, 00:05:47.567 { 00:05:47.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.567 "dma_device_type": 2 00:05:47.567 } 00:05:47.567 ], 00:05:47.568 "driver_specific": {} 00:05:47.568 } 00:05:47.568 ]' 00:05:47.568 03:14:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:47.568 03:14:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:47.568 03:14:34 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:47.568 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.568 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.568 [2024-11-21 03:14:34.924533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:47.568 [2024-11-21 03:14:34.924612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:47.568 [2024-11-21 03:14:34.924640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:47.568 [2024-11-21 03:14:34.924653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:47.568 [2024-11-21 03:14:34.927212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:47.568 [2024-11-21 03:14:34.927261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:47.568 Passthru0 00:05:47.568 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.568 03:14:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:47.568 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.568 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.568 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.568 03:14:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:47.568 { 00:05:47.568 "name": "Malloc0", 00:05:47.568 "aliases": [ 00:05:47.568 "a343e194-b4d5-4769-840b-93e18c871e9f" 00:05:47.568 ], 00:05:47.568 "product_name": "Malloc disk", 00:05:47.568 "block_size": 512, 00:05:47.568 "num_blocks": 16384, 00:05:47.568 "uuid": "a343e194-b4d5-4769-840b-93e18c871e9f", 00:05:47.568 "assigned_rate_limits": { 00:05:47.568 "rw_ios_per_sec": 0, 00:05:47.568 "rw_mbytes_per_sec": 0, 00:05:47.568 "r_mbytes_per_sec": 0, 00:05:47.568 "w_mbytes_per_sec": 0 00:05:47.568 }, 00:05:47.568 "claimed": true, 00:05:47.568 "claim_type": "exclusive_write", 00:05:47.568 "zoned": false, 00:05:47.568 "supported_io_types": { 00:05:47.568 "read": true, 00:05:47.568 "write": true, 00:05:47.568 "unmap": true, 00:05:47.568 "flush": true, 00:05:47.568 "reset": true, 00:05:47.568 "nvme_admin": false, 00:05:47.568 "nvme_io": false, 00:05:47.568 "nvme_io_md": false, 00:05:47.568 "write_zeroes": true, 00:05:47.568 "zcopy": true, 
00:05:47.568 "get_zone_info": false, 00:05:47.568 "zone_management": false, 00:05:47.568 "zone_append": false, 00:05:47.568 "compare": false, 00:05:47.568 "compare_and_write": false, 00:05:47.568 "abort": true, 00:05:47.568 "seek_hole": false, 00:05:47.568 "seek_data": false, 00:05:47.568 "copy": true, 00:05:47.568 "nvme_iov_md": false 00:05:47.568 }, 00:05:47.568 "memory_domains": [ 00:05:47.568 { 00:05:47.568 "dma_device_id": "system", 00:05:47.568 "dma_device_type": 1 00:05:47.568 }, 00:05:47.568 { 00:05:47.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.568 "dma_device_type": 2 00:05:47.568 } 00:05:47.568 ], 00:05:47.568 "driver_specific": {} 00:05:47.568 }, 00:05:47.568 { 00:05:47.568 "name": "Passthru0", 00:05:47.568 "aliases": [ 00:05:47.568 "aaa50720-d129-5b46-9070-aeb0b76306b8" 00:05:47.568 ], 00:05:47.568 "product_name": "passthru", 00:05:47.568 "block_size": 512, 00:05:47.568 "num_blocks": 16384, 00:05:47.568 "uuid": "aaa50720-d129-5b46-9070-aeb0b76306b8", 00:05:47.568 "assigned_rate_limits": { 00:05:47.568 "rw_ios_per_sec": 0, 00:05:47.568 "rw_mbytes_per_sec": 0, 00:05:47.568 "r_mbytes_per_sec": 0, 00:05:47.568 "w_mbytes_per_sec": 0 00:05:47.568 }, 00:05:47.568 "claimed": false, 00:05:47.568 "zoned": false, 00:05:47.568 "supported_io_types": { 00:05:47.568 "read": true, 00:05:47.568 "write": true, 00:05:47.568 "unmap": true, 00:05:47.568 "flush": true, 00:05:47.568 "reset": true, 00:05:47.568 "nvme_admin": false, 00:05:47.568 "nvme_io": false, 00:05:47.568 "nvme_io_md": false, 00:05:47.568 "write_zeroes": true, 00:05:47.568 "zcopy": true, 00:05:47.568 "get_zone_info": false, 00:05:47.568 "zone_management": false, 00:05:47.568 "zone_append": false, 00:05:47.568 "compare": false, 00:05:47.568 "compare_and_write": false, 00:05:47.568 "abort": true, 00:05:47.568 "seek_hole": false, 00:05:47.568 "seek_data": false, 00:05:47.568 "copy": true, 00:05:47.568 "nvme_iov_md": false 00:05:47.568 }, 00:05:47.568 "memory_domains": [ 00:05:47.568 { 00:05:47.568 "dma_device_id": "system", 00:05:47.568 "dma_device_type": 1 00:05:47.568 }, 00:05:47.568 { 00:05:47.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.568 "dma_device_type": 2 00:05:47.568 } 00:05:47.568 ], 00:05:47.568 "driver_specific": { 00:05:47.568 "passthru": { 00:05:47.568 "name": "Passthru0", 00:05:47.568 "base_bdev_name": "Malloc0" 00:05:47.568 } 00:05:47.568 } 00:05:47.568 } 00:05:47.568 ]' 00:05:47.568 03:14:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:47.568 03:14:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:47.568 03:14:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:47.568 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.568 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.568 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.568 03:14:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:47.568 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.568 03:14:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.568 03:14:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.568 03:14:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:47.568 03:14:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.568 03:14:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
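Everything in rpc_integrity above goes through rpc_cmd, the harness wrapper around the target's UNIX-socket RPC server. A hand-run equivalent of the same sequence, sketched with scripts/rpc.py against the default /var/tmp/spdk.sock (not part of the test itself):

    # 8 MiB malloc bdev with 512-byte blocks -> the 16384-block Malloc0 shown above
    scripts/rpc.py bdev_malloc_create 8 512
    # stack a passthru bdev on it; Malloc0 flips to "claimed": true / exclusive_write
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py bdev_get_bdevs          # now reports both bdevs
    # tear down in reverse order, releasing the claim first
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0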
00:05:47.568 03:14:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.568 03:14:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:47.568 03:14:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:47.568 03:14:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:47.568 00:05:47.568 real 0m0.224s 00:05:47.568 user 0m0.128s 00:05:47.568 sys 0m0.034s 00:05:47.568 ************************************ 00:05:47.568 END TEST rpc_integrity 00:05:47.568 ************************************ 00:05:47.568 03:14:35 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.568 03:14:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.568 03:14:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:47.568 03:14:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.568 03:14:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.568 03:14:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.568 ************************************ 00:05:47.568 START TEST rpc_plugins 00:05:47.568 ************************************ 00:05:47.568 03:14:35 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:47.568 03:14:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:47.568 03:14:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.568 03:14:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.568 03:14:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.568 03:14:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:47.828 03:14:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:47.828 03:14:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.828 03:14:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.828 03:14:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.828 03:14:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:47.828 { 00:05:47.828 "name": "Malloc1", 00:05:47.828 "aliases": [ 00:05:47.828 "832a2fbc-cbe8-4084-bd16-46a0148bf9ea" 00:05:47.828 ], 00:05:47.828 "product_name": "Malloc disk", 00:05:47.828 "block_size": 4096, 00:05:47.828 "num_blocks": 256, 00:05:47.828 "uuid": "832a2fbc-cbe8-4084-bd16-46a0148bf9ea", 00:05:47.828 "assigned_rate_limits": { 00:05:47.828 "rw_ios_per_sec": 0, 00:05:47.828 "rw_mbytes_per_sec": 0, 00:05:47.828 "r_mbytes_per_sec": 0, 00:05:47.828 "w_mbytes_per_sec": 0 00:05:47.828 }, 00:05:47.828 "claimed": false, 00:05:47.828 "zoned": false, 00:05:47.828 "supported_io_types": { 00:05:47.828 "read": true, 00:05:47.828 "write": true, 00:05:47.828 "unmap": true, 00:05:47.828 "flush": true, 00:05:47.828 "reset": true, 00:05:47.828 "nvme_admin": false, 00:05:47.828 "nvme_io": false, 00:05:47.828 "nvme_io_md": false, 00:05:47.828 "write_zeroes": true, 00:05:47.828 "zcopy": true, 00:05:47.828 "get_zone_info": false, 00:05:47.828 "zone_management": false, 00:05:47.828 "zone_append": false, 00:05:47.828 "compare": false, 00:05:47.828 "compare_and_write": false, 00:05:47.828 "abort": true, 00:05:47.828 "seek_hole": false, 00:05:47.828 "seek_data": false, 00:05:47.828 "copy": true, 00:05:47.828 "nvme_iov_md": false 00:05:47.828 }, 00:05:47.828 "memory_domains": [ 00:05:47.828 { 00:05:47.828 "dma_device_id": "system", 00:05:47.828 "dma_device_type": 1 00:05:47.828 }, 00:05:47.828 { 00:05:47.828 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:05:47.828 "dma_device_type": 2 00:05:47.828 } 00:05:47.828 ], 00:05:47.828 "driver_specific": {} 00:05:47.828 } 00:05:47.828 ]' 00:05:47.828 03:14:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:47.828 03:14:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:47.828 03:14:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:47.828 03:14:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.828 03:14:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.828 03:14:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.828 03:14:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:47.828 03:14:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.828 03:14:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.828 03:14:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.828 03:14:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:47.828 03:14:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:47.828 03:14:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:47.828 00:05:47.828 real 0m0.112s 00:05:47.828 user 0m0.059s 00:05:47.828 sys 0m0.016s 00:05:47.828 03:14:35 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.828 03:14:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.828 ************************************ 00:05:47.828 END TEST rpc_plugins 00:05:47.828 ************************************ 00:05:47.828 03:14:35 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:47.828 03:14:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.828 03:14:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.828 03:14:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.828 ************************************ 00:05:47.828 START TEST rpc_trace_cmd_test 00:05:47.828 ************************************ 00:05:47.828 03:14:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:47.828 03:14:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:47.828 03:14:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:47.828 03:14:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.828 03:14:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:47.828 03:14:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.828 03:14:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:47.828 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70917", 00:05:47.828 "tpoint_group_mask": "0x8", 00:05:47.828 "iscsi_conn": { 00:05:47.828 "mask": "0x2", 00:05:47.828 "tpoint_mask": "0x0" 00:05:47.828 }, 00:05:47.828 "scsi": { 00:05:47.828 "mask": "0x4", 00:05:47.828 "tpoint_mask": "0x0" 00:05:47.828 }, 00:05:47.828 "bdev": { 00:05:47.828 "mask": "0x8", 00:05:47.828 "tpoint_mask": "0xffffffffffffffff" 00:05:47.828 }, 00:05:47.828 "nvmf_rdma": { 00:05:47.828 "mask": "0x10", 00:05:47.828 "tpoint_mask": "0x0" 00:05:47.828 }, 00:05:47.828 "nvmf_tcp": { 00:05:47.828 "mask": "0x20", 00:05:47.828 "tpoint_mask": "0x0" 00:05:47.828 }, 00:05:47.828 "ftl": { 00:05:47.828 "mask": "0x40", 00:05:47.828 "tpoint_mask": "0x0" 00:05:47.828 }, 00:05:47.828 "blobfs": { 00:05:47.828 "mask": "0x80", 00:05:47.828 
"tpoint_mask": "0x0" 00:05:47.828 }, 00:05:47.828 "dsa": { 00:05:47.828 "mask": "0x200", 00:05:47.828 "tpoint_mask": "0x0" 00:05:47.828 }, 00:05:47.828 "thread": { 00:05:47.829 "mask": "0x400", 00:05:47.829 "tpoint_mask": "0x0" 00:05:47.829 }, 00:05:47.829 "nvme_pcie": { 00:05:47.829 "mask": "0x800", 00:05:47.829 "tpoint_mask": "0x0" 00:05:47.829 }, 00:05:47.829 "iaa": { 00:05:47.829 "mask": "0x1000", 00:05:47.829 "tpoint_mask": "0x0" 00:05:47.829 }, 00:05:47.829 "nvme_tcp": { 00:05:47.829 "mask": "0x2000", 00:05:47.829 "tpoint_mask": "0x0" 00:05:47.829 }, 00:05:47.829 "bdev_nvme": { 00:05:47.829 "mask": "0x4000", 00:05:47.829 "tpoint_mask": "0x0" 00:05:47.829 }, 00:05:47.829 "sock": { 00:05:47.829 "mask": "0x8000", 00:05:47.829 "tpoint_mask": "0x0" 00:05:47.829 }, 00:05:47.829 "blob": { 00:05:47.829 "mask": "0x10000", 00:05:47.829 "tpoint_mask": "0x0" 00:05:47.829 }, 00:05:47.829 "bdev_raid": { 00:05:47.829 "mask": "0x20000", 00:05:47.829 "tpoint_mask": "0x0" 00:05:47.829 }, 00:05:47.829 "scheduler": { 00:05:47.829 "mask": "0x40000", 00:05:47.829 "tpoint_mask": "0x0" 00:05:47.829 } 00:05:47.829 }' 00:05:47.829 03:14:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:47.829 03:14:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:47.829 03:14:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:47.829 03:14:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:47.829 03:14:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:47.829 03:14:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:47.829 03:14:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:48.087 03:14:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:48.087 03:14:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:48.087 03:14:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:48.087 00:05:48.087 real 0m0.160s 00:05:48.087 user 0m0.129s 00:05:48.087 sys 0m0.021s 00:05:48.087 03:14:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.087 ************************************ 00:05:48.087 END TEST rpc_trace_cmd_test 00:05:48.087 ************************************ 00:05:48.087 03:14:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:48.087 03:14:35 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:48.087 03:14:35 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:48.087 03:14:35 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:48.087 03:14:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.087 03:14:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.087 03:14:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.087 ************************************ 00:05:48.087 START TEST rpc_daemon_integrity 00:05:48.087 ************************************ 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:48.087 { 00:05:48.087 "name": "Malloc2", 00:05:48.087 "aliases": [ 00:05:48.087 "0d11703a-8117-4682-9742-bfe664280424" 00:05:48.087 ], 00:05:48.087 "product_name": "Malloc disk", 00:05:48.087 "block_size": 512, 00:05:48.087 "num_blocks": 16384, 00:05:48.087 "uuid": "0d11703a-8117-4682-9742-bfe664280424", 00:05:48.087 "assigned_rate_limits": { 00:05:48.087 "rw_ios_per_sec": 0, 00:05:48.087 "rw_mbytes_per_sec": 0, 00:05:48.087 "r_mbytes_per_sec": 0, 00:05:48.087 "w_mbytes_per_sec": 0 00:05:48.087 }, 00:05:48.087 "claimed": false, 00:05:48.087 "zoned": false, 00:05:48.087 "supported_io_types": { 00:05:48.087 "read": true, 00:05:48.087 "write": true, 00:05:48.087 "unmap": true, 00:05:48.087 "flush": true, 00:05:48.087 "reset": true, 00:05:48.087 "nvme_admin": false, 00:05:48.087 "nvme_io": false, 00:05:48.087 "nvme_io_md": false, 00:05:48.087 "write_zeroes": true, 00:05:48.087 "zcopy": true, 00:05:48.087 "get_zone_info": false, 00:05:48.087 "zone_management": false, 00:05:48.087 "zone_append": false, 00:05:48.087 "compare": false, 00:05:48.087 "compare_and_write": false, 00:05:48.087 "abort": true, 00:05:48.087 "seek_hole": false, 00:05:48.087 "seek_data": false, 00:05:48.087 "copy": true, 00:05:48.087 "nvme_iov_md": false 00:05:48.087 }, 00:05:48.087 "memory_domains": [ 00:05:48.087 { 00:05:48.087 "dma_device_id": "system", 00:05:48.087 "dma_device_type": 1 00:05:48.087 }, 00:05:48.087 { 00:05:48.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.087 "dma_device_type": 2 00:05:48.087 } 00:05:48.087 ], 00:05:48.087 "driver_specific": {} 00:05:48.087 } 00:05:48.087 ]' 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.087 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.087 [2024-11-21 03:14:35.612933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:48.087 [2024-11-21 03:14:35.613000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:48.088 [2024-11-21 03:14:35.613020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009680 00:05:48.088 [2024-11-21 03:14:35.613031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:48.088 [2024-11-21 03:14:35.615266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:48.088 [2024-11-21 03:14:35.615307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:48.088 Passthru0 00:05:48.088 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.088 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:48.088 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.088 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.088 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.088 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:48.088 { 00:05:48.088 "name": "Malloc2", 00:05:48.088 "aliases": [ 00:05:48.088 "0d11703a-8117-4682-9742-bfe664280424" 00:05:48.088 ], 00:05:48.088 "product_name": "Malloc disk", 00:05:48.088 "block_size": 512, 00:05:48.088 "num_blocks": 16384, 00:05:48.088 "uuid": "0d11703a-8117-4682-9742-bfe664280424", 00:05:48.088 "assigned_rate_limits": { 00:05:48.088 "rw_ios_per_sec": 0, 00:05:48.088 "rw_mbytes_per_sec": 0, 00:05:48.088 "r_mbytes_per_sec": 0, 00:05:48.088 "w_mbytes_per_sec": 0 00:05:48.088 }, 00:05:48.088 "claimed": true, 00:05:48.088 "claim_type": "exclusive_write", 00:05:48.088 "zoned": false, 00:05:48.088 "supported_io_types": { 00:05:48.088 "read": true, 00:05:48.088 "write": true, 00:05:48.088 "unmap": true, 00:05:48.088 "flush": true, 00:05:48.088 "reset": true, 00:05:48.088 "nvme_admin": false, 00:05:48.088 "nvme_io": false, 00:05:48.088 "nvme_io_md": false, 00:05:48.088 "write_zeroes": true, 00:05:48.088 "zcopy": true, 00:05:48.088 "get_zone_info": false, 00:05:48.088 "zone_management": false, 00:05:48.088 "zone_append": false, 00:05:48.088 "compare": false, 00:05:48.088 "compare_and_write": false, 00:05:48.088 "abort": true, 00:05:48.088 "seek_hole": false, 00:05:48.088 "seek_data": false, 00:05:48.088 "copy": true, 00:05:48.088 "nvme_iov_md": false 00:05:48.088 }, 00:05:48.088 "memory_domains": [ 00:05:48.088 { 00:05:48.088 "dma_device_id": "system", 00:05:48.088 "dma_device_type": 1 00:05:48.088 }, 00:05:48.088 { 00:05:48.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.088 "dma_device_type": 2 00:05:48.088 } 00:05:48.088 ], 00:05:48.088 "driver_specific": {} 00:05:48.088 }, 00:05:48.088 { 00:05:48.088 "name": "Passthru0", 00:05:48.088 "aliases": [ 00:05:48.088 "2648a0c3-d18e-5242-9f6b-64507d23f20c" 00:05:48.088 ], 00:05:48.088 "product_name": "passthru", 00:05:48.088 "block_size": 512, 00:05:48.088 "num_blocks": 16384, 00:05:48.088 "uuid": "2648a0c3-d18e-5242-9f6b-64507d23f20c", 00:05:48.088 "assigned_rate_limits": { 00:05:48.088 "rw_ios_per_sec": 0, 00:05:48.088 "rw_mbytes_per_sec": 0, 00:05:48.088 "r_mbytes_per_sec": 0, 00:05:48.088 "w_mbytes_per_sec": 0 00:05:48.088 }, 00:05:48.088 "claimed": false, 00:05:48.088 "zoned": false, 00:05:48.088 "supported_io_types": { 00:05:48.088 "read": true, 00:05:48.088 "write": true, 00:05:48.088 "unmap": true, 00:05:48.088 "flush": true, 00:05:48.088 "reset": true, 00:05:48.088 "nvme_admin": false, 00:05:48.088 "nvme_io": false, 00:05:48.088 "nvme_io_md": false, 00:05:48.088 "write_zeroes": true, 00:05:48.088 "zcopy": true, 00:05:48.088 "get_zone_info": false, 00:05:48.088 
"zone_management": false, 00:05:48.088 "zone_append": false, 00:05:48.088 "compare": false, 00:05:48.088 "compare_and_write": false, 00:05:48.088 "abort": true, 00:05:48.088 "seek_hole": false, 00:05:48.088 "seek_data": false, 00:05:48.088 "copy": true, 00:05:48.088 "nvme_iov_md": false 00:05:48.088 }, 00:05:48.088 "memory_domains": [ 00:05:48.088 { 00:05:48.088 "dma_device_id": "system", 00:05:48.088 "dma_device_type": 1 00:05:48.088 }, 00:05:48.088 { 00:05:48.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.088 "dma_device_type": 2 00:05:48.088 } 00:05:48.088 ], 00:05:48.088 "driver_specific": { 00:05:48.088 "passthru": { 00:05:48.088 "name": "Passthru0", 00:05:48.088 "base_bdev_name": "Malloc2" 00:05:48.088 } 00:05:48.088 } 00:05:48.088 } 00:05:48.088 ]' 00:05:48.088 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:48.347 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:48.347 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:48.347 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.347 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.347 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.347 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:48.347 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.347 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.347 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.348 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:48.348 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.348 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.348 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.348 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:48.348 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:48.348 03:14:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:48.348 00:05:48.348 real 0m0.232s 00:05:48.348 user 0m0.133s 00:05:48.348 sys 0m0.032s 00:05:48.348 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.348 ************************************ 00:05:48.348 END TEST rpc_daemon_integrity 00:05:48.348 ************************************ 00:05:48.348 03:14:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.348 03:14:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:48.348 03:14:35 rpc -- rpc/rpc.sh@84 -- # killprocess 70917 00:05:48.348 03:14:35 rpc -- common/autotest_common.sh@954 -- # '[' -z 70917 ']' 00:05:48.348 03:14:35 rpc -- common/autotest_common.sh@958 -- # kill -0 70917 00:05:48.348 03:14:35 rpc -- common/autotest_common.sh@959 -- # uname 00:05:48.348 03:14:35 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.348 03:14:35 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70917 00:05:48.348 03:14:35 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.348 killing process with pid 70917 00:05:48.348 03:14:35 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:05:48.348 03:14:35 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70917' 00:05:48.348 03:14:35 rpc -- common/autotest_common.sh@973 -- # kill 70917 00:05:48.348 03:14:35 rpc -- common/autotest_common.sh@978 -- # wait 70917 00:05:48.918 00:05:48.918 real 0m2.460s 00:05:48.918 user 0m2.816s 00:05:48.918 sys 0m0.672s 00:05:48.918 03:14:36 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.918 ************************************ 00:05:48.918 END TEST rpc 00:05:48.918 ************************************ 00:05:48.918 03:14:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.918 03:14:36 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:48.918 03:14:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.918 03:14:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.918 03:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:48.918 ************************************ 00:05:48.918 START TEST skip_rpc 00:05:48.918 ************************************ 00:05:48.918 03:14:36 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:48.918 * Looking for test storage... 00:05:48.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:48.918 03:14:36 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:48.918 03:14:36 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:48.918 03:14:36 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:48.918 03:14:36 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.918 03:14:36 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:48.918 03:14:36 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.918 03:14:36 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.918 --rc genhtml_branch_coverage=1 00:05:48.918 --rc genhtml_function_coverage=1 00:05:48.918 --rc genhtml_legend=1 00:05:48.918 --rc geninfo_all_blocks=1 00:05:48.918 --rc geninfo_unexecuted_blocks=1 00:05:48.918 00:05:48.918 ' 00:05:48.918 03:14:36 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.918 --rc genhtml_branch_coverage=1 00:05:48.918 --rc genhtml_function_coverage=1 00:05:48.918 --rc genhtml_legend=1 00:05:48.918 --rc geninfo_all_blocks=1 00:05:48.918 --rc geninfo_unexecuted_blocks=1 00:05:48.918 00:05:48.918 ' 00:05:48.918 03:14:36 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.918 --rc genhtml_branch_coverage=1 00:05:48.918 --rc genhtml_function_coverage=1 00:05:48.918 --rc genhtml_legend=1 00:05:48.918 --rc geninfo_all_blocks=1 00:05:48.918 --rc geninfo_unexecuted_blocks=1 00:05:48.918 00:05:48.918 ' 00:05:48.918 03:14:36 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.918 --rc genhtml_branch_coverage=1 00:05:48.918 --rc genhtml_function_coverage=1 00:05:48.918 --rc genhtml_legend=1 00:05:48.918 --rc geninfo_all_blocks=1 00:05:48.918 --rc geninfo_unexecuted_blocks=1 00:05:48.918 00:05:48.918 ' 00:05:48.918 03:14:36 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:48.918 03:14:36 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:48.918 03:14:36 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:48.918 03:14:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.918 03:14:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.918 03:14:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.918 ************************************ 00:05:48.918 START TEST skip_rpc 00:05:48.918 ************************************ 00:05:48.918 03:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:48.918 03:14:36 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=71118 00:05:48.918 03:14:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.918 03:14:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:48.918 03:14:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:49.179 [2024-11-21 03:14:36.501615] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:05:49.179 [2024-11-21 03:14:36.501767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71118 ] 00:05:49.179 [2024-11-21 03:14:36.638973] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:49.179 [2024-11-21 03:14:36.671084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.179 [2024-11-21 03:14:36.695343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71118 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 71118 ']' 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 71118 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71118 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.467 killing process with pid 
71118 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71118' 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 71118 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 71118 00:05:54.467 00:05:54.467 real 0m5.257s 00:05:54.467 user 0m4.880s 00:05:54.467 sys 0m0.277s 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.467 03:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.467 ************************************ 00:05:54.467 END TEST skip_rpc 00:05:54.467 ************************************ 00:05:54.467 03:14:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:54.467 03:14:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.467 03:14:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.467 03:14:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.467 ************************************ 00:05:54.467 START TEST skip_rpc_with_json 00:05:54.467 ************************************ 00:05:54.467 03:14:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:54.467 03:14:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:54.467 03:14:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71200 00:05:54.468 03:14:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.468 03:14:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71200 00:05:54.468 03:14:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 71200 ']' 00:05:54.468 03:14:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.468 03:14:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.468 03:14:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.468 03:14:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.468 03:14:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.468 03:14:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.468 [2024-11-21 03:14:41.778236] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:05:54.468 [2024-11-21 03:14:41.778327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71200 ] 00:05:54.468 [2024-11-21 03:14:41.903790] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
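The skip_rpc case that finished above is a negative test: the target runs with --no-rpc-server, so rpc_cmd spdk_get_version must fail, and the NOT wrapper turns that failure into a pass. The shape of the check, reduced to plain shell (a sketch, not the harness code):

    # start a target with the RPC server disabled
    build/bin/spdk_tgt --no-rpc-server -m 0x1 &

    # with no server on /var/tmp/spdk.sock, any RPC must fail;
    # NOT in the harness asserts exactly this inverted status
    if ! scripts/rpc.py spdk_get_version; then
        echo 'RPC correctly unavailable'
    fi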
00:05:54.468 [2024-11-21 03:14:41.927289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.468 [2024-11-21 03:14:41.944017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.408 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.408 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:55.408 03:14:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:55.408 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.408 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.408 [2024-11-21 03:14:42.624452] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:55.408 request: 00:05:55.408 { 00:05:55.408 "trtype": "tcp", 00:05:55.408 "method": "nvmf_get_transports", 00:05:55.408 "req_id": 1 00:05:55.408 } 00:05:55.408 Got JSON-RPC error response 00:05:55.408 response: 00:05:55.408 { 00:05:55.408 "code": -19, 00:05:55.408 "message": "No such device" 00:05:55.408 } 00:05:55.408 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:55.408 03:14:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:55.408 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.408 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.408 [2024-11-21 03:14:42.636512] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.408 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.408 03:14:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:55.408 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.408 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.408 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.408 03:14:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:55.408 { 00:05:55.408 "subsystems": [ 00:05:55.408 { 00:05:55.408 "subsystem": "fsdev", 00:05:55.408 "config": [ 00:05:55.408 { 00:05:55.408 "method": "fsdev_set_opts", 00:05:55.408 "params": { 00:05:55.408 "fsdev_io_pool_size": 65535, 00:05:55.408 "fsdev_io_cache_size": 256 00:05:55.408 } 00:05:55.408 } 00:05:55.408 ] 00:05:55.408 }, 00:05:55.408 { 00:05:55.408 "subsystem": "keyring", 00:05:55.408 "config": [] 00:05:55.408 }, 00:05:55.408 { 00:05:55.408 "subsystem": "iobuf", 00:05:55.408 "config": [ 00:05:55.408 { 00:05:55.408 "method": "iobuf_set_options", 00:05:55.408 "params": { 00:05:55.408 "small_pool_count": 8192, 00:05:55.408 "large_pool_count": 1024, 00:05:55.408 "small_bufsize": 8192, 00:05:55.408 "large_bufsize": 135168, 00:05:55.408 "enable_numa": false 00:05:55.408 } 00:05:55.408 } 00:05:55.408 ] 00:05:55.408 }, 00:05:55.408 { 00:05:55.408 "subsystem": "sock", 00:05:55.408 "config": [ 00:05:55.408 { 00:05:55.408 "method": "sock_set_default_impl", 00:05:55.408 "params": { 00:05:55.408 "impl_name": "posix" 00:05:55.408 } 00:05:55.408 }, 00:05:55.408 { 00:05:55.408 "method": "sock_impl_set_options", 00:05:55.408 "params": { 00:05:55.408 "impl_name": "ssl", 00:05:55.408 "recv_buf_size": 4096, 00:05:55.408 
"send_buf_size": 4096, 00:05:55.408 "enable_recv_pipe": true, 00:05:55.408 "enable_quickack": false, 00:05:55.408 "enable_placement_id": 0, 00:05:55.408 "enable_zerocopy_send_server": true, 00:05:55.408 "enable_zerocopy_send_client": false, 00:05:55.408 "zerocopy_threshold": 0, 00:05:55.408 "tls_version": 0, 00:05:55.408 "enable_ktls": false 00:05:55.408 } 00:05:55.408 }, 00:05:55.408 { 00:05:55.408 "method": "sock_impl_set_options", 00:05:55.408 "params": { 00:05:55.408 "impl_name": "posix", 00:05:55.408 "recv_buf_size": 2097152, 00:05:55.408 "send_buf_size": 2097152, 00:05:55.408 "enable_recv_pipe": true, 00:05:55.408 "enable_quickack": false, 00:05:55.408 "enable_placement_id": 0, 00:05:55.408 "enable_zerocopy_send_server": true, 00:05:55.408 "enable_zerocopy_send_client": false, 00:05:55.408 "zerocopy_threshold": 0, 00:05:55.408 "tls_version": 0, 00:05:55.408 "enable_ktls": false 00:05:55.408 } 00:05:55.408 } 00:05:55.408 ] 00:05:55.408 }, 00:05:55.408 { 00:05:55.408 "subsystem": "vmd", 00:05:55.408 "config": [] 00:05:55.408 }, 00:05:55.408 { 00:05:55.408 "subsystem": "accel", 00:05:55.408 "config": [ 00:05:55.408 { 00:05:55.408 "method": "accel_set_options", 00:05:55.408 "params": { 00:05:55.408 "small_cache_size": 128, 00:05:55.408 "large_cache_size": 16, 00:05:55.408 "task_count": 2048, 00:05:55.408 "sequence_count": 2048, 00:05:55.409 "buf_count": 2048 00:05:55.409 } 00:05:55.409 } 00:05:55.409 ] 00:05:55.409 }, 00:05:55.409 { 00:05:55.409 "subsystem": "bdev", 00:05:55.409 "config": [ 00:05:55.409 { 00:05:55.409 "method": "bdev_set_options", 00:05:55.409 "params": { 00:05:55.409 "bdev_io_pool_size": 65535, 00:05:55.409 "bdev_io_cache_size": 256, 00:05:55.409 "bdev_auto_examine": true, 00:05:55.409 "iobuf_small_cache_size": 128, 00:05:55.409 "iobuf_large_cache_size": 16 00:05:55.409 } 00:05:55.409 }, 00:05:55.409 { 00:05:55.409 "method": "bdev_raid_set_options", 00:05:55.409 "params": { 00:05:55.409 "process_window_size_kb": 1024, 00:05:55.409 "process_max_bandwidth_mb_sec": 0 00:05:55.409 } 00:05:55.409 }, 00:05:55.409 { 00:05:55.409 "method": "bdev_iscsi_set_options", 00:05:55.409 "params": { 00:05:55.409 "timeout_sec": 30 00:05:55.409 } 00:05:55.409 }, 00:05:55.409 { 00:05:55.409 "method": "bdev_nvme_set_options", 00:05:55.409 "params": { 00:05:55.409 "action_on_timeout": "none", 00:05:55.409 "timeout_us": 0, 00:05:55.409 "timeout_admin_us": 0, 00:05:55.409 "keep_alive_timeout_ms": 10000, 00:05:55.409 "arbitration_burst": 0, 00:05:55.409 "low_priority_weight": 0, 00:05:55.409 "medium_priority_weight": 0, 00:05:55.409 "high_priority_weight": 0, 00:05:55.409 "nvme_adminq_poll_period_us": 10000, 00:05:55.409 "nvme_ioq_poll_period_us": 0, 00:05:55.409 "io_queue_requests": 0, 00:05:55.409 "delay_cmd_submit": true, 00:05:55.409 "transport_retry_count": 4, 00:05:55.409 "bdev_retry_count": 3, 00:05:55.409 "transport_ack_timeout": 0, 00:05:55.409 "ctrlr_loss_timeout_sec": 0, 00:05:55.409 "reconnect_delay_sec": 0, 00:05:55.409 "fast_io_fail_timeout_sec": 0, 00:05:55.409 "disable_auto_failback": false, 00:05:55.409 "generate_uuids": false, 00:05:55.409 "transport_tos": 0, 00:05:55.409 "nvme_error_stat": false, 00:05:55.409 "rdma_srq_size": 0, 00:05:55.409 "io_path_stat": false, 00:05:55.409 "allow_accel_sequence": false, 00:05:55.409 "rdma_max_cq_size": 0, 00:05:55.409 "rdma_cm_event_timeout_ms": 0, 00:05:55.409 "dhchap_digests": [ 00:05:55.409 "sha256", 00:05:55.409 "sha384", 00:05:55.409 "sha512" 00:05:55.409 ], 00:05:55.409 "dhchap_dhgroups": [ 00:05:55.409 "null", 00:05:55.409 
"ffdhe2048", 00:05:55.409 "ffdhe3072", 00:05:55.409 "ffdhe4096", 00:05:55.409 "ffdhe6144", 00:05:55.409 "ffdhe8192" 00:05:55.409 ] 00:05:55.409 } 00:05:55.409 }, 00:05:55.409 { 00:05:55.409 "method": "bdev_nvme_set_hotplug", 00:05:55.409 "params": { 00:05:55.409 "period_us": 100000, 00:05:55.409 "enable": false 00:05:55.409 } 00:05:55.409 }, 00:05:55.409 { 00:05:55.409 "method": "bdev_wait_for_examine" 00:05:55.409 } 00:05:55.409 ] 00:05:55.409 }, 00:05:55.409 { 00:05:55.409 "subsystem": "scsi", 00:05:55.409 "config": null 00:05:55.409 }, 00:05:55.409 { 00:05:55.409 "subsystem": "scheduler", 00:05:55.409 "config": [ 00:05:55.409 { 00:05:55.409 "method": "framework_set_scheduler", 00:05:55.409 "params": { 00:05:55.409 "name": "static" 00:05:55.409 } 00:05:55.409 } 00:05:55.409 ] 00:05:55.409 }, 00:05:55.409 { 00:05:55.409 "subsystem": "vhost_scsi", 00:05:55.409 "config": [] 00:05:55.409 }, 00:05:55.409 { 00:05:55.409 "subsystem": "vhost_blk", 00:05:55.409 "config": [] 00:05:55.409 }, 00:05:55.409 { 00:05:55.409 "subsystem": "ublk", 00:05:55.409 "config": [] 00:05:55.409 }, 00:05:55.409 { 00:05:55.409 "subsystem": "nbd", 00:05:55.409 "config": [] 00:05:55.409 }, 00:05:55.409 { 00:05:55.409 "subsystem": "nvmf", 00:05:55.409 "config": [ 00:05:55.409 { 00:05:55.409 "method": "nvmf_set_config", 00:05:55.409 "params": { 00:05:55.409 "discovery_filter": "match_any", 00:05:55.409 "admin_cmd_passthru": { 00:05:55.409 "identify_ctrlr": false 00:05:55.409 }, 00:05:55.409 "dhchap_digests": [ 00:05:55.409 "sha256", 00:05:55.409 "sha384", 00:05:55.409 "sha512" 00:05:55.409 ], 00:05:55.409 "dhchap_dhgroups": [ 00:05:55.409 "null", 00:05:55.409 "ffdhe2048", 00:05:55.409 "ffdhe3072", 00:05:55.409 "ffdhe4096", 00:05:55.409 "ffdhe6144", 00:05:55.409 "ffdhe8192" 00:05:55.409 ] 00:05:55.409 } 00:05:55.409 }, 00:05:55.409 { 00:05:55.409 "method": "nvmf_set_max_subsystems", 00:05:55.409 "params": { 00:05:55.409 "max_subsystems": 1024 00:05:55.409 } 00:05:55.409 }, 00:05:55.409 { 00:05:55.409 "method": "nvmf_set_crdt", 00:05:55.409 "params": { 00:05:55.409 "crdt1": 0, 00:05:55.409 "crdt2": 0, 00:05:55.409 "crdt3": 0 00:05:55.409 } 00:05:55.409 }, 00:05:55.409 { 00:05:55.409 "method": "nvmf_create_transport", 00:05:55.409 "params": { 00:05:55.409 "trtype": "TCP", 00:05:55.409 "max_queue_depth": 128, 00:05:55.409 "max_io_qpairs_per_ctrlr": 127, 00:05:55.409 "in_capsule_data_size": 4096, 00:05:55.409 "max_io_size": 131072, 00:05:55.409 "io_unit_size": 131072, 00:05:55.409 "max_aq_depth": 128, 00:05:55.409 "num_shared_buffers": 511, 00:05:55.409 "buf_cache_size": 4294967295, 00:05:55.409 "dif_insert_or_strip": false, 00:05:55.409 "zcopy": false, 00:05:55.409 "c2h_success": true, 00:05:55.409 "sock_priority": 0, 00:05:55.409 "abort_timeout_sec": 1, 00:05:55.409 "ack_timeout": 0, 00:05:55.409 "data_wr_pool_size": 0 00:05:55.409 } 00:05:55.409 } 00:05:55.409 ] 00:05:55.409 }, 00:05:55.409 { 00:05:55.409 "subsystem": "iscsi", 00:05:55.409 "config": [ 00:05:55.409 { 00:05:55.409 "method": "iscsi_set_options", 00:05:55.409 "params": { 00:05:55.409 "node_base": "iqn.2016-06.io.spdk", 00:05:55.409 "max_sessions": 128, 00:05:55.409 "max_connections_per_session": 2, 00:05:55.409 "max_queue_depth": 64, 00:05:55.409 "default_time2wait": 2, 00:05:55.409 "default_time2retain": 20, 00:05:55.409 "first_burst_length": 8192, 00:05:55.409 "immediate_data": true, 00:05:55.409 "allow_duplicated_isid": false, 00:05:55.409 "error_recovery_level": 0, 00:05:55.409 "nop_timeout": 60, 00:05:55.409 "nop_in_interval": 30, 00:05:55.409 
"disable_chap": false, 00:05:55.409 "require_chap": false, 00:05:55.409 "mutual_chap": false, 00:05:55.409 "chap_group": 0, 00:05:55.409 "max_large_datain_per_connection": 64, 00:05:55.409 "max_r2t_per_connection": 4, 00:05:55.409 "pdu_pool_size": 36864, 00:05:55.409 "immediate_data_pool_size": 16384, 00:05:55.409 "data_out_pool_size": 2048 00:05:55.409 } 00:05:55.409 } 00:05:55.409 ] 00:05:55.409 } 00:05:55.409 ] 00:05:55.409 } 00:05:55.409 03:14:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:55.409 03:14:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71200 00:05:55.409 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71200 ']' 00:05:55.409 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71200 00:05:55.409 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:55.409 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.409 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71200 00:05:55.409 killing process with pid 71200 00:05:55.409 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.409 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.409 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71200' 00:05:55.409 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71200 00:05:55.409 03:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71200 00:05:55.667 03:14:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:55.668 03:14:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71229 00:05:55.668 03:14:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:00.938 03:14:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71229 00:06:00.938 03:14:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71229 ']' 00:06:00.938 03:14:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71229 00:06:00.938 03:14:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:00.938 03:14:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.938 03:14:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71229 00:06:00.938 killing process with pid 71229 00:06:00.938 03:14:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.938 03:14:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.938 03:14:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71229' 00:06:00.938 03:14:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71229 00:06:00.938 03:14:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71229 00:06:00.938 03:14:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:00.938 03:14:48 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:00.938 00:06:00.938 real 0m6.576s 00:06:00.938 user 0m6.305s 00:06:00.938 sys 0m0.498s 00:06:00.938 03:14:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.938 ************************************ 00:06:00.938 END TEST skip_rpc_with_json 00:06:00.938 03:14:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.938 ************************************ 00:06:00.938 03:14:48 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:00.938 03:14:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.938 03:14:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.938 03:14:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.938 ************************************ 00:06:00.938 START TEST skip_rpc_with_delay 00:06:00.938 ************************************ 00:06:00.938 03:14:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:00.938 03:14:48 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.938 03:14:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:00.939 03:14:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.939 03:14:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.939 03:14:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.939 03:14:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.939 03:14:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.939 03:14:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.939 03:14:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.939 03:14:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.939 03:14:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:00.939 03:14:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.939 [2024-11-21 03:14:48.396188] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
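The ERROR just above is the expected result of skip_rpc_with_delay: --wait-for-rpc holds subsystem init until a framework_start_init RPC arrives, which can never happen once --no-rpc-server has disabled the server, so the app refuses the combination. For contrast, a sketch of the two startup modes these tests exercise (flags as they appear in the log; framework_start_init is the usual way to resume a --wait-for-rpc boot):

    # skip_rpc_with_json: persist live state, then replay it on a fresh target
    scripts/rpc.py save_config > config.json
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json

    # --wait-for-rpc only makes sense with a reachable RPC server
    build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
    scripts/rpc.py framework_start_init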
00:06:00.939 03:14:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:00.939 03:14:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:00.939 03:14:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:00.939 03:14:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:00.939 00:06:00.939 real 0m0.105s 00:06:00.939 user 0m0.058s 00:06:00.939 sys 0m0.047s 00:06:00.939 03:14:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.939 ************************************ 00:06:00.939 END TEST skip_rpc_with_delay 00:06:00.939 ************************************ 00:06:00.939 03:14:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:00.939 03:14:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:00.939 03:14:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:00.939 03:14:48 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:00.939 03:14:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.939 03:14:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.939 03:14:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.939 ************************************ 00:06:00.939 START TEST exit_on_failed_rpc_init 00:06:00.939 ************************************ 00:06:00.939 03:14:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:00.939 03:14:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.939 03:14:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71335 00:06:00.939 03:14:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71335 00:06:00.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.939 03:14:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 71335 ']' 00:06:00.939 03:14:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.939 03:14:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.939 03:14:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.939 03:14:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.939 03:14:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.196 [2024-11-21 03:14:48.556624] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:01.196 [2024-11-21 03:14:48.556736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71335 ] 00:06:01.196 [2024-11-21 03:14:48.688827] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
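exit_on_failed_rpc_init starts its first target and then blocks in waitforlisten until pid 71335 answers on /var/tmp/spdk.sock. The helper is essentially a bounded poll; a rough sketch under assumed internals (the probe method, retry interval, and stderr silencing are illustrative, not read from this trace):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        # Give up early if the target died during startup.
        kill -0 "$pid" 2> /dev/null || return 1
        # The socket counts as ready once the RPC server answers a trivial method.
        if ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}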
00:06:01.196 [2024-11-21 03:14:48.714603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.196 [2024-11-21 03:14:48.732084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.138 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.138 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:02.138 03:14:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.138 03:14:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.138 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:02.138 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.138 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.138 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.138 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.138 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.138 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.138 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.138 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.138 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:02.138 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.138 [2024-11-21 03:14:49.463855] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:02.138 [2024-11-21 03:14:49.463989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71353 ] 00:06:02.138 [2024-11-21 03:14:49.595480] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:02.138 [2024-11-21 03:14:49.623931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.138 [2024-11-21 03:14:49.643108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.138 [2024-11-21 03:14:49.643184] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
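Both targets in this test are launched through valid_exec_arg, whose type -t / type -P walk fills most of the trace above: builtins and shell functions pass as-is, while anything classified as a file must resolve to an executable path. A condensed sketch of that case statement (same branches as traced, minor bookkeeping dropped):

valid_exec_arg() {
    local arg=$1
    # type -t classifies the word as builtin, function, file, alias, keyword, or nothing.
    case "$(type -t "$arg")" in
        builtin | function)
            ;;
        file)
            # Resolve through $PATH and require an executable on disk.
            arg=$(type -P "$arg") && [[ -x $arg ]]
            ;;
        *)
            return 1
            ;;
    esac
}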
00:06:02.138 [2024-11-21 03:14:49.643197] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:02.138 [2024-11-21 03:14:49.643207] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.396 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:02.396 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:02.396 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:02.396 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:02.396 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:02.396 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:02.396 03:14:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:02.396 03:14:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71335 00:06:02.396 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 71335 ']' 00:06:02.396 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 71335 00:06:02.396 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:02.396 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.396 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71335 00:06:02.396 killing process with pid 71335 00:06:02.396 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.396 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.396 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71335' 00:06:02.396 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 71335 00:06:02.396 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 71335 00:06:02.654 ************************************ 00:06:02.654 END TEST exit_on_failed_rpc_init 00:06:02.654 ************************************ 00:06:02.654 00:06:02.654 real 0m1.481s 00:06:02.654 user 0m1.621s 00:06:02.654 sys 0m0.388s 00:06:02.654 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.654 03:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:02.654 03:14:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:02.654 00:06:02.654 real 0m13.742s 00:06:02.654 user 0m13.017s 00:06:02.654 sys 0m1.372s 00:06:02.654 ************************************ 00:06:02.654 END TEST skip_rpc 00:06:02.654 ************************************ 00:06:02.654 03:14:49 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.655 03:14:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.655 03:14:50 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:02.655 03:14:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.655 03:14:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.655 03:14:50 -- common/autotest_common.sh@10 -- # set +x 00:06:02.655 
************************************ 00:06:02.655 START TEST rpc_client 00:06:02.655 ************************************ 00:06:02.655 03:14:50 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:02.655 * Looking for test storage... 00:06:02.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:02.655 03:14:50 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.655 03:14:50 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.655 03:14:50 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.655 03:14:50 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.655 03:14:50 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:02.655 03:14:50 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.655 03:14:50 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.655 --rc genhtml_branch_coverage=1 00:06:02.655 --rc genhtml_function_coverage=1 00:06:02.655 --rc genhtml_legend=1 00:06:02.655 --rc geninfo_all_blocks=1 00:06:02.655 --rc geninfo_unexecuted_blocks=1 00:06:02.655 00:06:02.655 ' 00:06:02.655 03:14:50 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.655 --rc genhtml_branch_coverage=1 00:06:02.655 --rc genhtml_function_coverage=1 00:06:02.655 --rc genhtml_legend=1 00:06:02.655 --rc geninfo_all_blocks=1 00:06:02.655 --rc geninfo_unexecuted_blocks=1 00:06:02.655 00:06:02.655 ' 00:06:02.655 03:14:50 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.655 --rc genhtml_branch_coverage=1 00:06:02.655 --rc genhtml_function_coverage=1 00:06:02.655 --rc genhtml_legend=1 00:06:02.655 --rc geninfo_all_blocks=1 00:06:02.655 --rc geninfo_unexecuted_blocks=1 00:06:02.655 00:06:02.655 ' 00:06:02.655 03:14:50 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.655 --rc genhtml_branch_coverage=1 00:06:02.655 --rc genhtml_function_coverage=1 00:06:02.655 --rc genhtml_legend=1 00:06:02.655 --rc geninfo_all_blocks=1 00:06:02.655 --rc geninfo_unexecuted_blocks=1 00:06:02.655 00:06:02.655 ' 00:06:02.655 03:14:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:02.655 OK 00:06:02.655 03:14:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:02.655 00:06:02.655 real 0m0.173s 00:06:02.655 user 0m0.096s 00:06:02.655 sys 0m0.084s 00:06:02.655 03:14:50 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.655 03:14:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:02.655 ************************************ 00:06:02.655 END TEST rpc_client 00:06:02.655 ************************************ 00:06:02.914 03:14:50 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:02.914 03:14:50 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.914 03:14:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.914 03:14:50 -- common/autotest_common.sh@10 -- # set +x 00:06:02.914 ************************************ 00:06:02.914 START TEST json_config 00:06:02.914 ************************************ 00:06:02.914 03:14:50 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:02.914 03:14:50 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.914 03:14:50 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.914 03:14:50 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.914 03:14:50 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.914 03:14:50 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.914 03:14:50 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.914 03:14:50 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.914 03:14:50 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.914 03:14:50 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.914 03:14:50 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.914 03:14:50 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.914 03:14:50 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.914 03:14:50 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.914 03:14:50 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.914 03:14:50 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.914 03:14:50 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:02.914 03:14:50 json_config -- scripts/common.sh@345 -- # : 1 00:06:02.914 03:14:50 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.914 03:14:50 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.914 03:14:50 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:02.914 03:14:50 json_config -- scripts/common.sh@353 -- # local d=1 00:06:02.914 03:14:50 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.914 03:14:50 json_config -- scripts/common.sh@355 -- # echo 1 00:06:02.914 03:14:50 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.914 03:14:50 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:02.914 03:14:50 json_config -- scripts/common.sh@353 -- # local d=2 00:06:02.914 03:14:50 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.914 03:14:50 json_config -- scripts/common.sh@355 -- # echo 2 00:06:02.914 03:14:50 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.914 03:14:50 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.914 03:14:50 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.914 03:14:50 json_config -- scripts/common.sh@368 -- # return 0 00:06:02.914 03:14:50 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.914 03:14:50 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.914 --rc genhtml_branch_coverage=1 00:06:02.914 --rc genhtml_function_coverage=1 00:06:02.914 --rc genhtml_legend=1 00:06:02.914 --rc geninfo_all_blocks=1 00:06:02.914 --rc geninfo_unexecuted_blocks=1 00:06:02.914 00:06:02.914 ' 00:06:02.914 03:14:50 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.914 --rc genhtml_branch_coverage=1 00:06:02.914 --rc genhtml_function_coverage=1 00:06:02.914 --rc genhtml_legend=1 00:06:02.914 --rc geninfo_all_blocks=1 00:06:02.914 --rc geninfo_unexecuted_blocks=1 00:06:02.914 00:06:02.914 ' 00:06:02.914 03:14:50 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.914 --rc genhtml_branch_coverage=1 00:06:02.914 --rc genhtml_function_coverage=1 00:06:02.914 --rc genhtml_legend=1 00:06:02.914 --rc geninfo_all_blocks=1 00:06:02.914 --rc geninfo_unexecuted_blocks=1 00:06:02.914 00:06:02.914 ' 00:06:02.914 03:14:50 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.914 --rc genhtml_branch_coverage=1 00:06:02.914 --rc genhtml_function_coverage=1 00:06:02.914 --rc genhtml_legend=1 00:06:02.914 --rc geninfo_all_blocks=1 00:06:02.914 --rc geninfo_unexecuted_blocks=1 00:06:02.914 00:06:02.914 ' 00:06:02.914 03:14:50 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:02.914 03:14:50 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:02.914 03:14:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.914 03:14:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.914 03:14:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.914 03:14:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.914 03:14:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.914 03:14:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.914 03:14:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.914 03:14:50 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.914 03:14:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.914 03:14:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.914 03:14:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:55306210-7915-454d-a0b9-4b0948f508be 00:06:02.914 03:14:50 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=55306210-7915-454d-a0b9-4b0948f508be 00:06:02.914 03:14:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.914 03:14:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.914 03:14:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:02.914 03:14:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.914 03:14:50 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:02.914 03:14:50 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:02.914 03:14:50 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.914 03:14:50 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.914 03:14:50 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.914 03:14:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.914 03:14:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.914 03:14:50 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.914 03:14:50 json_config -- paths/export.sh@5 -- # export PATH 00:06:02.914 03:14:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.915 03:14:50 json_config -- nvmf/common.sh@51 -- # : 0 00:06:02.915 03:14:50 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:02.915 03:14:50 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:02.915 03:14:50 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.915 03:14:50 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.915 03:14:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.915 03:14:50 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:02.915 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:02.915 03:14:50 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:02.915 03:14:50 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:02.915 03:14:50 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:02.915 03:14:50 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:02.915 03:14:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:02.915 03:14:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:02.915 03:14:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:02.915 03:14:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:02.915 WARNING: No tests are enabled so not running JSON configuration tests 00:06:02.915 03:14:50 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:02.915 03:14:50 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:02.915 00:06:02.915 real 0m0.141s 00:06:02.915 user 0m0.090s 00:06:02.915 sys 0m0.050s 00:06:02.915 03:14:50 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.915 03:14:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.915 ************************************ 00:06:02.915 END TEST json_config 00:06:02.915 ************************************ 00:06:02.915 03:14:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:02.915 03:14:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.915 03:14:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.915 03:14:50 -- common/autotest_common.sh@10 -- # set +x 00:06:02.915 ************************************ 00:06:02.915 START TEST json_config_extra_key 00:06:02.915 ************************************ 00:06:02.915 03:14:50 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:02.915 03:14:50 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.915 03:14:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.915 03:14:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.173 03:14:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.173 03:14:50 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:03.173 03:14:50 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.173 03:14:50 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.173 --rc genhtml_branch_coverage=1 00:06:03.173 --rc genhtml_function_coverage=1 00:06:03.173 --rc genhtml_legend=1 00:06:03.173 --rc geninfo_all_blocks=1 00:06:03.173 --rc geninfo_unexecuted_blocks=1 00:06:03.173 00:06:03.173 ' 00:06:03.173 03:14:50 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.173 --rc genhtml_branch_coverage=1 00:06:03.173 --rc genhtml_function_coverage=1 00:06:03.173 --rc genhtml_legend=1 00:06:03.173 --rc geninfo_all_blocks=1 00:06:03.173 --rc geninfo_unexecuted_blocks=1 00:06:03.173 00:06:03.173 ' 00:06:03.173 03:14:50 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.173 --rc genhtml_branch_coverage=1 00:06:03.173 --rc genhtml_function_coverage=1 00:06:03.173 --rc genhtml_legend=1 00:06:03.173 --rc geninfo_all_blocks=1 00:06:03.173 --rc geninfo_unexecuted_blocks=1 00:06:03.173 00:06:03.173 ' 00:06:03.173 03:14:50 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.173 --rc genhtml_branch_coverage=1 00:06:03.173 --rc 
genhtml_function_coverage=1 00:06:03.173 --rc genhtml_legend=1 00:06:03.173 --rc geninfo_all_blocks=1 00:06:03.173 --rc geninfo_unexecuted_blocks=1 00:06:03.173 00:06:03.173 ' 00:06:03.173 03:14:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:03.173 03:14:50 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:03.173 03:14:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.173 03:14:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.173 03:14:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.173 03:14:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.173 03:14:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.173 03:14:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.173 03:14:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.173 03:14:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.173 03:14:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.173 03:14:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.173 03:14:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:55306210-7915-454d-a0b9-4b0948f508be 00:06:03.173 03:14:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=55306210-7915-454d-a0b9-4b0948f508be 00:06:03.173 03:14:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.173 03:14:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.173 03:14:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:03.173 03:14:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.173 03:14:50 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:03.173 03:14:50 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.174 03:14:50 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.174 03:14:50 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.174 03:14:50 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.174 03:14:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.174 03:14:50 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.174 03:14:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:03.174 03:14:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.174 03:14:50 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:03.174 03:14:50 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:03.174 03:14:50 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:03.174 03:14:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.174 03:14:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.174 03:14:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.174 03:14:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:03.174 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:03.174 03:14:50 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:03.174 03:14:50 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:03.174 03:14:50 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:03.174 03:14:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:03.174 03:14:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:03.174 03:14:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:03.174 03:14:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:03.174 03:14:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:03.174 03:14:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:03.174 03:14:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:03.174 03:14:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:03.174 03:14:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:03.174 03:14:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:03.174 INFO: launching applications... 00:06:03.174 03:14:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
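json_config_extra_key keeps its target's state in the associative arrays declared above (app_pid, app_socket, app_params, configs_path), so "launching applications" reduces to expanding those per-app entries into one spdk_tgt command line. A hedged sketch of the start step (the ERR trap and waitforlisten wiring of the real json_config/common.sh are omitted):

declare -A app_pid app_socket app_params configs_path
app_socket[target]=/var/tmp/spdk_tgt.sock
app_params[target]='-m 0x1 -s 1024'
configs_path[target]=./test/json_config/extra_key.json

# app_params is left unquoted on purpose so the shell word-splits the flags.
./build/bin/spdk_tgt ${app_params[target]} -r "${app_socket[target]}" \
    --json "${configs_path[target]}" &
app_pid[target]=$!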
00:06:03.174 03:14:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:03.174 03:14:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:03.174 03:14:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:03.174 03:14:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:03.174 03:14:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:03.174 03:14:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:03.174 03:14:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.174 03:14:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.174 03:14:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71530 00:06:03.174 03:14:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:03.174 Waiting for target to run... 00:06:03.174 03:14:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71530 /var/tmp/spdk_tgt.sock 00:06:03.174 03:14:50 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:03.174 03:14:50 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 71530 ']' 00:06:03.174 03:14:50 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:03.174 03:14:50 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.174 03:14:50 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:03.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:03.174 03:14:50 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.174 03:14:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:03.174 [2024-11-21 03:14:50.631241] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:03.174 [2024-11-21 03:14:50.631742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71530 ] 00:06:03.431 [2024-11-21 03:14:50.925500] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:03.431 [2024-11-21 03:14:50.953877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.431 [2024-11-21 03:14:50.964322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.997 03:14:51 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.997 03:14:51 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:03.997 00:06:03.997 INFO: shutting down applications... 00:06:03.997 03:14:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:03.997 03:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
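json_config_test_shutdown_app, entered just below, is the mirror image of the launch: send one SIGINT, then probe with kill -0 for at most 30 half-second intervals, which is exactly the i < 30 / sleep 0.5 loop in the following trace. A compact sketch of that body (handling for a pid that never exits is left out):

json_config_test_shutdown_app() {
    local app=$1
    kill -SIGINT "${app_pid[$app]}"
    for ((i = 0; i < 30; i++)); do
        # kill -0 probes without delivering a signal; failure means the app is gone.
        kill -0 "${app_pid[$app]}" 2> /dev/null || break
        sleep 0.5
    done
    app_pid[$app]=
}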
00:06:03.997 03:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:03.997 03:14:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:03.997 03:14:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:03.997 03:14:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71530 ]] 00:06:03.997 03:14:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71530 00:06:03.997 03:14:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:03.997 03:14:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.997 03:14:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71530 00:06:03.997 03:14:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:04.563 03:14:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:04.563 03:14:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:04.563 03:14:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71530 00:06:04.563 03:14:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:04.563 03:14:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:04.563 03:14:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:04.563 SPDK target shutdown done 00:06:04.563 Success 00:06:04.563 03:14:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:04.563 03:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:04.563 00:06:04.563 real 0m1.546s 00:06:04.563 user 0m1.207s 00:06:04.563 sys 0m0.354s 00:06:04.563 ************************************ 00:06:04.563 END TEST json_config_extra_key 00:06:04.563 ************************************ 00:06:04.563 03:14:51 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.563 03:14:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:04.563 03:14:51 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:04.563 03:14:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.563 03:14:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.563 03:14:51 -- common/autotest_common.sh@10 -- # set +x 00:06:04.563 ************************************ 00:06:04.563 START TEST alias_rpc 00:06:04.563 ************************************ 00:06:04.563 03:14:52 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:04.563 * Looking for test storage... 
00:06:04.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:04.563 03:14:52 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.563 03:14:52 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.563 03:14:52 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.821 03:14:52 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.821 03:14:52 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.821 03:14:52 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.822 03:14:52 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:04.822 03:14:52 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.822 03:14:52 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.822 --rc genhtml_branch_coverage=1 00:06:04.822 --rc genhtml_function_coverage=1 00:06:04.822 --rc genhtml_legend=1 00:06:04.822 --rc geninfo_all_blocks=1 00:06:04.822 --rc geninfo_unexecuted_blocks=1 00:06:04.822 00:06:04.822 ' 00:06:04.822 03:14:52 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.822 --rc genhtml_branch_coverage=1 00:06:04.822 --rc genhtml_function_coverage=1 00:06:04.822 --rc genhtml_legend=1 00:06:04.822 --rc geninfo_all_blocks=1 00:06:04.822 --rc geninfo_unexecuted_blocks=1 00:06:04.822 00:06:04.822 ' 00:06:04.822 03:14:52 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:04.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.822 --rc genhtml_branch_coverage=1 00:06:04.822 --rc genhtml_function_coverage=1 00:06:04.822 --rc genhtml_legend=1 00:06:04.822 --rc geninfo_all_blocks=1 00:06:04.822 --rc geninfo_unexecuted_blocks=1 00:06:04.822 00:06:04.822 ' 00:06:04.822 03:14:52 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.822 --rc genhtml_branch_coverage=1 00:06:04.822 --rc genhtml_function_coverage=1 00:06:04.822 --rc genhtml_legend=1 00:06:04.822 --rc geninfo_all_blocks=1 00:06:04.822 --rc geninfo_unexecuted_blocks=1 00:06:04.822 00:06:04.822 ' 00:06:04.822 03:14:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:04.822 03:14:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71603 00:06:04.822 03:14:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71603 00:06:04.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.822 03:14:52 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 71603 ']' 00:06:04.822 03:14:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.822 03:14:52 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.822 03:14:52 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.822 03:14:52 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.822 03:14:52 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.822 03:14:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.822 [2024-11-21 03:14:52.220930] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:04.822 [2024-11-21 03:14:52.221162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71603 ] 00:06:04.822 [2024-11-21 03:14:52.352889] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
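killprocess, applied to pid 71603 just below and already to 71200, 71229, and 71335 earlier in this log, always runs the same recipe: verify the pid is still alive, fetch its command name with ps (reactor_0 for every SPDK target here), refuse to signal a sudo wrapper directly, then kill and wait. A simplified Linux-only sketch (the real helper also carries a FreeBSD branch and handles the sudo case differently):

killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1              # still running?
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # Signalling sudo itself would orphan the real target process.
    [[ $process_name != sudo ]] || return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}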
00:06:05.088 [2024-11-21 03:14:52.383499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.088 [2024-11-21 03:14:52.402832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.668 03:14:53 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.668 03:14:53 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.668 03:14:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:05.926 03:14:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71603 00:06:05.926 03:14:53 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 71603 ']' 00:06:05.926 03:14:53 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 71603 00:06:05.926 03:14:53 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:05.926 03:14:53 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.926 03:14:53 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71603 00:06:05.926 killing process with pid 71603 00:06:05.926 03:14:53 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.926 03:14:53 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.926 03:14:53 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71603' 00:06:05.926 03:14:53 alias_rpc -- common/autotest_common.sh@973 -- # kill 71603 00:06:05.926 03:14:53 alias_rpc -- common/autotest_common.sh@978 -- # wait 71603 00:06:06.184 ************************************ 00:06:06.184 END TEST alias_rpc 00:06:06.184 ************************************ 00:06:06.184 00:06:06.184 real 0m1.536s 00:06:06.184 user 0m1.647s 00:06:06.184 sys 0m0.379s 00:06:06.184 03:14:53 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.184 03:14:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.184 03:14:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:06.184 03:14:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:06.184 03:14:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.184 03:14:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.184 03:14:53 -- common/autotest_common.sh@10 -- # set +x 00:06:06.184 ************************************ 00:06:06.184 START TEST spdkcli_tcp 00:06:06.184 ************************************ 00:06:06.184 03:14:53 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:06.184 * Looking for test storage... 
00:06:06.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:06.184 03:14:53 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.184 03:14:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.184 03:14:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.442 03:14:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.442 03:14:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:06.442 03:14:53 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.442 03:14:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.442 --rc genhtml_branch_coverage=1 00:06:06.442 --rc genhtml_function_coverage=1 00:06:06.442 --rc genhtml_legend=1 00:06:06.442 --rc geninfo_all_blocks=1 00:06:06.442 --rc geninfo_unexecuted_blocks=1 00:06:06.442 00:06:06.442 ' 00:06:06.442 03:14:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.442 --rc genhtml_branch_coverage=1 00:06:06.442 --rc genhtml_function_coverage=1 00:06:06.442 --rc genhtml_legend=1 00:06:06.442 --rc geninfo_all_blocks=1 00:06:06.442 --rc geninfo_unexecuted_blocks=1 00:06:06.443 
00:06:06.443 ' 00:06:06.443 03:14:53 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.443 --rc genhtml_branch_coverage=1 00:06:06.443 --rc genhtml_function_coverage=1 00:06:06.443 --rc genhtml_legend=1 00:06:06.443 --rc geninfo_all_blocks=1 00:06:06.443 --rc geninfo_unexecuted_blocks=1 00:06:06.443 00:06:06.443 ' 00:06:06.443 03:14:53 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.443 --rc genhtml_branch_coverage=1 00:06:06.443 --rc genhtml_function_coverage=1 00:06:06.443 --rc genhtml_legend=1 00:06:06.443 --rc geninfo_all_blocks=1 00:06:06.443 --rc geninfo_unexecuted_blocks=1 00:06:06.443 00:06:06.443 ' 00:06:06.443 03:14:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:06.443 03:14:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:06.443 03:14:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:06.443 03:14:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:06.443 03:14:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:06.443 03:14:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:06.443 03:14:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:06.443 03:14:53 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:06.443 03:14:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.443 03:14:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71688 00:06:06.443 03:14:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:06.443 03:14:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71688 00:06:06.443 03:14:53 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 71688 ']' 00:06:06.443 03:14:53 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.443 03:14:53 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.443 03:14:53 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.443 03:14:53 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.443 03:14:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.443 [2024-11-21 03:14:53.840220] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:06.443 [2024-11-21 03:14:53.840505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71688 ] 00:06:06.443 [2024-11-21 03:14:53.974586] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
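The trace around this point brings up spdk_tgt for the spdkcli_tcp test on its UNIX-domain RPC socket; once it is listening, tcp.sh bridges that socket to TCP so rpc.py can drive the target over the network. A minimal sketch of the same pattern, assuming an spdk_tgt is already serving /var/tmp/spdk.sock (the port, retry count, and timeout mirror the values tcp.sh uses below):

    # Bridge TCP port 9998 to the target's UNIX-domain RPC socket (left running in the background).
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &

    # Drive the target over TCP: retry up to 100 times (-r) with a 2-second timeout per attempt (-t).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods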
00:06:06.701 [2024-11-21 03:14:54.004060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.701 [2024-11-21 03:14:54.025033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.701 [2024-11-21 03:14:54.025112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.268 03:14:54 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.268 03:14:54 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:07.268 03:14:54 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71700 00:06:07.268 03:14:54 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:07.268 03:14:54 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:07.525 [ 00:06:07.525 "bdev_malloc_delete", 00:06:07.525 "bdev_malloc_create", 00:06:07.525 "bdev_null_resize", 00:06:07.525 "bdev_null_delete", 00:06:07.525 "bdev_null_create", 00:06:07.525 "bdev_nvme_cuse_unregister", 00:06:07.525 "bdev_nvme_cuse_register", 00:06:07.525 "bdev_opal_new_user", 00:06:07.525 "bdev_opal_set_lock_state", 00:06:07.525 "bdev_opal_delete", 00:06:07.525 "bdev_opal_get_info", 00:06:07.525 "bdev_opal_create", 00:06:07.525 "bdev_nvme_opal_revert", 00:06:07.525 "bdev_nvme_opal_init", 00:06:07.525 "bdev_nvme_send_cmd", 00:06:07.525 "bdev_nvme_set_keys", 00:06:07.525 "bdev_nvme_get_path_iostat", 00:06:07.525 "bdev_nvme_get_mdns_discovery_info", 00:06:07.525 "bdev_nvme_stop_mdns_discovery", 00:06:07.525 "bdev_nvme_start_mdns_discovery", 00:06:07.525 "bdev_nvme_set_multipath_policy", 00:06:07.525 "bdev_nvme_set_preferred_path", 00:06:07.525 "bdev_nvme_get_io_paths", 00:06:07.525 "bdev_nvme_remove_error_injection", 00:06:07.525 "bdev_nvme_add_error_injection", 00:06:07.525 "bdev_nvme_get_discovery_info", 00:06:07.525 "bdev_nvme_stop_discovery", 00:06:07.525 "bdev_nvme_start_discovery", 00:06:07.525 "bdev_nvme_get_controller_health_info", 00:06:07.525 "bdev_nvme_disable_controller", 00:06:07.525 "bdev_nvme_enable_controller", 00:06:07.525 "bdev_nvme_reset_controller", 00:06:07.525 "bdev_nvme_get_transport_statistics", 00:06:07.525 "bdev_nvme_apply_firmware", 00:06:07.525 "bdev_nvme_detach_controller", 00:06:07.525 "bdev_nvme_get_controllers", 00:06:07.525 "bdev_nvme_attach_controller", 00:06:07.525 "bdev_nvme_set_hotplug", 00:06:07.525 "bdev_nvme_set_options", 00:06:07.525 "bdev_passthru_delete", 00:06:07.525 "bdev_passthru_create", 00:06:07.525 "bdev_lvol_set_parent_bdev", 00:06:07.525 "bdev_lvol_set_parent", 00:06:07.525 "bdev_lvol_check_shallow_copy", 00:06:07.525 "bdev_lvol_start_shallow_copy", 00:06:07.525 "bdev_lvol_grow_lvstore", 00:06:07.525 "bdev_lvol_get_lvols", 00:06:07.525 "bdev_lvol_get_lvstores", 00:06:07.525 "bdev_lvol_delete", 00:06:07.525 "bdev_lvol_set_read_only", 00:06:07.525 "bdev_lvol_resize", 00:06:07.525 "bdev_lvol_decouple_parent", 00:06:07.525 "bdev_lvol_inflate", 00:06:07.525 "bdev_lvol_rename", 00:06:07.525 "bdev_lvol_clone_bdev", 00:06:07.525 "bdev_lvol_clone", 00:06:07.525 "bdev_lvol_snapshot", 00:06:07.525 "bdev_lvol_create", 00:06:07.525 "bdev_lvol_delete_lvstore", 00:06:07.525 "bdev_lvol_rename_lvstore", 00:06:07.525 "bdev_lvol_create_lvstore", 00:06:07.525 "bdev_raid_set_options", 00:06:07.525 "bdev_raid_remove_base_bdev", 00:06:07.525 "bdev_raid_add_base_bdev", 00:06:07.525 "bdev_raid_delete", 00:06:07.525 "bdev_raid_create", 00:06:07.525 "bdev_raid_get_bdevs", 00:06:07.525 "bdev_error_inject_error", 00:06:07.525 
"bdev_error_delete", 00:06:07.525 "bdev_error_create", 00:06:07.525 "bdev_split_delete", 00:06:07.525 "bdev_split_create", 00:06:07.525 "bdev_delay_delete", 00:06:07.525 "bdev_delay_create", 00:06:07.525 "bdev_delay_update_latency", 00:06:07.525 "bdev_zone_block_delete", 00:06:07.525 "bdev_zone_block_create", 00:06:07.525 "blobfs_create", 00:06:07.525 "blobfs_detect", 00:06:07.525 "blobfs_set_cache_size", 00:06:07.525 "bdev_xnvme_delete", 00:06:07.525 "bdev_xnvme_create", 00:06:07.525 "bdev_aio_delete", 00:06:07.525 "bdev_aio_rescan", 00:06:07.525 "bdev_aio_create", 00:06:07.525 "bdev_ftl_set_property", 00:06:07.525 "bdev_ftl_get_properties", 00:06:07.525 "bdev_ftl_get_stats", 00:06:07.525 "bdev_ftl_unmap", 00:06:07.525 "bdev_ftl_unload", 00:06:07.525 "bdev_ftl_delete", 00:06:07.525 "bdev_ftl_load", 00:06:07.525 "bdev_ftl_create", 00:06:07.525 "bdev_virtio_attach_controller", 00:06:07.525 "bdev_virtio_scsi_get_devices", 00:06:07.525 "bdev_virtio_detach_controller", 00:06:07.525 "bdev_virtio_blk_set_hotplug", 00:06:07.525 "bdev_iscsi_delete", 00:06:07.525 "bdev_iscsi_create", 00:06:07.525 "bdev_iscsi_set_options", 00:06:07.525 "accel_error_inject_error", 00:06:07.525 "ioat_scan_accel_module", 00:06:07.525 "dsa_scan_accel_module", 00:06:07.525 "iaa_scan_accel_module", 00:06:07.525 "keyring_file_remove_key", 00:06:07.525 "keyring_file_add_key", 00:06:07.525 "keyring_linux_set_options", 00:06:07.525 "fsdev_aio_delete", 00:06:07.525 "fsdev_aio_create", 00:06:07.525 "iscsi_get_histogram", 00:06:07.525 "iscsi_enable_histogram", 00:06:07.525 "iscsi_set_options", 00:06:07.525 "iscsi_get_auth_groups", 00:06:07.525 "iscsi_auth_group_remove_secret", 00:06:07.525 "iscsi_auth_group_add_secret", 00:06:07.525 "iscsi_delete_auth_group", 00:06:07.525 "iscsi_create_auth_group", 00:06:07.525 "iscsi_set_discovery_auth", 00:06:07.525 "iscsi_get_options", 00:06:07.525 "iscsi_target_node_request_logout", 00:06:07.525 "iscsi_target_node_set_redirect", 00:06:07.525 "iscsi_target_node_set_auth", 00:06:07.525 "iscsi_target_node_add_lun", 00:06:07.525 "iscsi_get_stats", 00:06:07.525 "iscsi_get_connections", 00:06:07.525 "iscsi_portal_group_set_auth", 00:06:07.525 "iscsi_start_portal_group", 00:06:07.525 "iscsi_delete_portal_group", 00:06:07.525 "iscsi_create_portal_group", 00:06:07.525 "iscsi_get_portal_groups", 00:06:07.525 "iscsi_delete_target_node", 00:06:07.525 "iscsi_target_node_remove_pg_ig_maps", 00:06:07.525 "iscsi_target_node_add_pg_ig_maps", 00:06:07.525 "iscsi_create_target_node", 00:06:07.525 "iscsi_get_target_nodes", 00:06:07.525 "iscsi_delete_initiator_group", 00:06:07.525 "iscsi_initiator_group_remove_initiators", 00:06:07.525 "iscsi_initiator_group_add_initiators", 00:06:07.525 "iscsi_create_initiator_group", 00:06:07.525 "iscsi_get_initiator_groups", 00:06:07.525 "nvmf_set_crdt", 00:06:07.525 "nvmf_set_config", 00:06:07.525 "nvmf_set_max_subsystems", 00:06:07.525 "nvmf_stop_mdns_prr", 00:06:07.525 "nvmf_publish_mdns_prr", 00:06:07.525 "nvmf_subsystem_get_listeners", 00:06:07.525 "nvmf_subsystem_get_qpairs", 00:06:07.525 "nvmf_subsystem_get_controllers", 00:06:07.526 "nvmf_get_stats", 00:06:07.526 "nvmf_get_transports", 00:06:07.526 "nvmf_create_transport", 00:06:07.526 "nvmf_get_targets", 00:06:07.526 "nvmf_delete_target", 00:06:07.526 "nvmf_create_target", 00:06:07.526 "nvmf_subsystem_allow_any_host", 00:06:07.526 "nvmf_subsystem_set_keys", 00:06:07.526 "nvmf_subsystem_remove_host", 00:06:07.526 "nvmf_subsystem_add_host", 00:06:07.526 "nvmf_ns_remove_host", 00:06:07.526 "nvmf_ns_add_host", 
00:06:07.526 "nvmf_subsystem_remove_ns", 00:06:07.526 "nvmf_subsystem_set_ns_ana_group", 00:06:07.526 "nvmf_subsystem_add_ns", 00:06:07.526 "nvmf_subsystem_listener_set_ana_state", 00:06:07.526 "nvmf_discovery_get_referrals", 00:06:07.526 "nvmf_discovery_remove_referral", 00:06:07.526 "nvmf_discovery_add_referral", 00:06:07.526 "nvmf_subsystem_remove_listener", 00:06:07.526 "nvmf_subsystem_add_listener", 00:06:07.526 "nvmf_delete_subsystem", 00:06:07.526 "nvmf_create_subsystem", 00:06:07.526 "nvmf_get_subsystems", 00:06:07.526 "env_dpdk_get_mem_stats", 00:06:07.526 "nbd_get_disks", 00:06:07.526 "nbd_stop_disk", 00:06:07.526 "nbd_start_disk", 00:06:07.526 "ublk_recover_disk", 00:06:07.526 "ublk_get_disks", 00:06:07.526 "ublk_stop_disk", 00:06:07.526 "ublk_start_disk", 00:06:07.526 "ublk_destroy_target", 00:06:07.526 "ublk_create_target", 00:06:07.526 "virtio_blk_create_transport", 00:06:07.526 "virtio_blk_get_transports", 00:06:07.526 "vhost_controller_set_coalescing", 00:06:07.526 "vhost_get_controllers", 00:06:07.526 "vhost_delete_controller", 00:06:07.526 "vhost_create_blk_controller", 00:06:07.526 "vhost_scsi_controller_remove_target", 00:06:07.526 "vhost_scsi_controller_add_target", 00:06:07.526 "vhost_start_scsi_controller", 00:06:07.526 "vhost_create_scsi_controller", 00:06:07.526 "thread_set_cpumask", 00:06:07.526 "scheduler_set_options", 00:06:07.526 "framework_get_governor", 00:06:07.526 "framework_get_scheduler", 00:06:07.526 "framework_set_scheduler", 00:06:07.526 "framework_get_reactors", 00:06:07.526 "thread_get_io_channels", 00:06:07.526 "thread_get_pollers", 00:06:07.526 "thread_get_stats", 00:06:07.526 "framework_monitor_context_switch", 00:06:07.526 "spdk_kill_instance", 00:06:07.526 "log_enable_timestamps", 00:06:07.526 "log_get_flags", 00:06:07.526 "log_clear_flag", 00:06:07.526 "log_set_flag", 00:06:07.526 "log_get_level", 00:06:07.526 "log_set_level", 00:06:07.526 "log_get_print_level", 00:06:07.526 "log_set_print_level", 00:06:07.526 "framework_enable_cpumask_locks", 00:06:07.526 "framework_disable_cpumask_locks", 00:06:07.526 "framework_wait_init", 00:06:07.526 "framework_start_init", 00:06:07.526 "scsi_get_devices", 00:06:07.526 "bdev_get_histogram", 00:06:07.526 "bdev_enable_histogram", 00:06:07.526 "bdev_set_qos_limit", 00:06:07.526 "bdev_set_qd_sampling_period", 00:06:07.526 "bdev_get_bdevs", 00:06:07.526 "bdev_reset_iostat", 00:06:07.526 "bdev_get_iostat", 00:06:07.526 "bdev_examine", 00:06:07.526 "bdev_wait_for_examine", 00:06:07.526 "bdev_set_options", 00:06:07.526 "accel_get_stats", 00:06:07.526 "accel_set_options", 00:06:07.526 "accel_set_driver", 00:06:07.526 "accel_crypto_key_destroy", 00:06:07.526 "accel_crypto_keys_get", 00:06:07.526 "accel_crypto_key_create", 00:06:07.526 "accel_assign_opc", 00:06:07.526 "accel_get_module_info", 00:06:07.526 "accel_get_opc_assignments", 00:06:07.526 "vmd_rescan", 00:06:07.526 "vmd_remove_device", 00:06:07.526 "vmd_enable", 00:06:07.526 "sock_get_default_impl", 00:06:07.526 "sock_set_default_impl", 00:06:07.526 "sock_impl_set_options", 00:06:07.526 "sock_impl_get_options", 00:06:07.526 "iobuf_get_stats", 00:06:07.526 "iobuf_set_options", 00:06:07.526 "keyring_get_keys", 00:06:07.526 "framework_get_pci_devices", 00:06:07.526 "framework_get_config", 00:06:07.526 "framework_get_subsystems", 00:06:07.526 "fsdev_set_opts", 00:06:07.526 "fsdev_get_opts", 00:06:07.526 "trace_get_info", 00:06:07.526 "trace_get_tpoint_group_mask", 00:06:07.526 "trace_disable_tpoint_group", 00:06:07.526 "trace_enable_tpoint_group", 00:06:07.526 
"trace_clear_tpoint_mask", 00:06:07.526 "trace_set_tpoint_mask", 00:06:07.526 "notify_get_notifications", 00:06:07.526 "notify_get_types", 00:06:07.526 "spdk_get_version", 00:06:07.526 "rpc_get_methods" 00:06:07.526 ] 00:06:07.526 03:14:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:07.526 03:14:54 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:07.526 03:14:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.526 03:14:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:07.526 03:14:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71688 00:06:07.526 03:14:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71688 ']' 00:06:07.526 03:14:54 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71688 00:06:07.526 03:14:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:07.526 03:14:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.526 03:14:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71688 00:06:07.526 killing process with pid 71688 00:06:07.526 03:14:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.526 03:14:54 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.526 03:14:54 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71688' 00:06:07.526 03:14:54 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71688 00:06:07.526 03:14:54 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71688 00:06:07.783 ************************************ 00:06:07.783 END TEST spdkcli_tcp 00:06:07.783 ************************************ 00:06:07.783 00:06:07.783 real 0m1.591s 00:06:07.783 user 0m2.803s 00:06:07.783 sys 0m0.403s 00:06:07.783 03:14:55 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.783 03:14:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.783 03:14:55 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:07.783 03:14:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.783 03:14:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.783 03:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:07.783 ************************************ 00:06:07.783 START TEST dpdk_mem_utility 00:06:07.783 ************************************ 00:06:07.783 03:14:55 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:07.783 * Looking for test storage... 
00:06:07.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:07.783 03:14:55 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.783 03:14:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.783 03:14:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.041 03:14:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.041 03:14:55 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:08.041 03:14:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.041 03:14:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.041 --rc genhtml_branch_coverage=1 00:06:08.041 --rc genhtml_function_coverage=1 00:06:08.041 --rc genhtml_legend=1 00:06:08.041 --rc geninfo_all_blocks=1 00:06:08.041 --rc geninfo_unexecuted_blocks=1 00:06:08.041 00:06:08.041 ' 00:06:08.041 03:14:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.041 --rc 
genhtml_branch_coverage=1 00:06:08.041 --rc genhtml_function_coverage=1 00:06:08.041 --rc genhtml_legend=1 00:06:08.041 --rc geninfo_all_blocks=1 00:06:08.041 --rc geninfo_unexecuted_blocks=1 00:06:08.041 00:06:08.041 ' 00:06:08.041 03:14:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.041 --rc genhtml_branch_coverage=1 00:06:08.041 --rc genhtml_function_coverage=1 00:06:08.041 --rc genhtml_legend=1 00:06:08.041 --rc geninfo_all_blocks=1 00:06:08.041 --rc geninfo_unexecuted_blocks=1 00:06:08.041 00:06:08.041 ' 00:06:08.041 03:14:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.041 --rc genhtml_branch_coverage=1 00:06:08.041 --rc genhtml_function_coverage=1 00:06:08.041 --rc genhtml_legend=1 00:06:08.041 --rc geninfo_all_blocks=1 00:06:08.041 --rc geninfo_unexecuted_blocks=1 00:06:08.041 00:06:08.041 ' 00:06:08.042 03:14:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:08.042 03:14:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71783 00:06:08.042 03:14:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71783 00:06:08.042 03:14:55 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 71783 ']' 00:06:08.042 03:14:55 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.042 03:14:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.042 03:14:55 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.042 03:14:55 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.042 03:14:55 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.042 03:14:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:08.042 [2024-11-21 03:14:55.462471] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:08.042 [2024-11-21 03:14:55.462739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71783 ] 00:06:08.042 [2024-11-21 03:14:55.595165] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
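With the dpdk_mem_utility target up, test_dpdk_mem_info.sh first issues the env_dpdk_get_mem_stats RPC, whose reply below names the dump file it writes (/tmp/spdk_mem_dump.txt), and then post-processes that dump with dpdk_mem_info.py; the -m 0 pass produces the per-heap element listing that follows. A minimal sketch of the same flow against a running spdk_tgt:

    # Ask the target to dump DPDK memory statistics; the RPC reply names the output file.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats

    # Summarize the dump (heap/mempool/memzone totals), then print heap 0 in element-level detail.
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0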
00:06:08.299 [2024-11-21 03:14:55.622947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.299 [2024-11-21 03:14:55.642651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.865 03:14:56 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.865 03:14:56 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:08.865 03:14:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:08.865 03:14:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:08.865 03:14:56 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.865 03:14:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:08.865 { 00:06:08.865 "filename": "/tmp/spdk_mem_dump.txt" 00:06:08.865 } 00:06:08.865 03:14:56 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.865 03:14:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:08.865 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:08.865 1 heaps totaling size 810.000000 MiB 00:06:08.865 size: 810.000000 MiB heap id: 0 00:06:08.865 end heaps---------- 00:06:08.865 9 mempools totaling size 595.772034 MiB 00:06:08.865 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:08.865 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:08.865 size: 92.545471 MiB name: bdev_io_71783 00:06:08.865 size: 50.003479 MiB name: msgpool_71783 00:06:08.865 size: 36.509338 MiB name: fsdev_io_71783 00:06:08.865 size: 21.763794 MiB name: PDU_Pool 00:06:08.865 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:08.865 size: 4.133484 MiB name: evtpool_71783 00:06:08.865 size: 0.026123 MiB name: Session_Pool 00:06:08.865 end mempools------- 00:06:08.865 6 memzones totaling size 4.142822 MiB 00:06:08.865 size: 1.000366 MiB name: RG_ring_0_71783 00:06:08.865 size: 1.000366 MiB name: RG_ring_1_71783 00:06:08.865 size: 1.000366 MiB name: RG_ring_4_71783 00:06:08.865 size: 1.000366 MiB name: RG_ring_5_71783 00:06:08.865 size: 0.125366 MiB name: RG_ring_2_71783 00:06:08.865 size: 0.015991 MiB name: RG_ring_3_71783 00:06:08.865 end memzones------- 00:06:08.865 03:14:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:08.865 heap id: 0 total size: 810.000000 MiB number of busy elements: 311 number of free elements: 15 00:06:08.865 list of free elements. 
size: 10.954163 MiB 00:06:08.865 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:08.865 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:08.865 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:08.865 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:08.865 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:08.865 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:08.865 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:08.865 element at address: 0x200000200000 with size: 0.858093 MiB 00:06:08.865 element at address: 0x20001a600000 with size: 0.567871 MiB 00:06:08.865 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:08.865 element at address: 0x200000c00000 with size: 0.487000 MiB 00:06:08.865 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:08.865 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:08.865 element at address: 0x200027a00000 with size: 0.395752 MiB 00:06:08.865 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:08.865 list of standard malloc elements. size: 199.126953 MiB 00:06:08.865 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:08.865 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:08.866 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:08.866 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:08.866 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:08.866 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:08.866 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:08.866 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:08.866 element at address: 0x2000002fbcc0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000003fdec0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 
00:06:08.866 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:08.866 element at 
address: 0x200000c7d780 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000a67d580 
with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:08.866 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:08.866 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20001a691600 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20001a691780 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20001a691840 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20001a691900 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:06:08.866 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a692080 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a692140 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a692200 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a692380 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a692440 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a692500 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a692680 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a692740 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a692800 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a692980 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a693040 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a693100 with size: 0.000183 MiB 
00:06:08.867 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a693280 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a693340 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a693400 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a693580 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a693640 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a693700 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a693880 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a693940 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a694000 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a694180 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a694240 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a694300 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a694480 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a694540 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a694600 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a694780 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a694840 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a694900 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a695080 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a695140 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a695200 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:08.867 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a65500 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:06:08.867 element at 
address: 0x200027a6c1c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6e7c0 
with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:06:08.867 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:06:08.868 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:06:08.868 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:06:08.868 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:06:08.868 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:08.868 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:08.868 list of memzone associated elements. 
size: 599.918884 MiB 00:06:08.868 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:08.868 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:08.868 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:08.868 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:08.868 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:08.868 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71783_0 00:06:08.868 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:08.868 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71783_0 00:06:08.868 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:08.868 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71783_0 00:06:08.868 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:08.868 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:08.868 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:08.868 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:08.868 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:08.868 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71783_0 00:06:08.868 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:08.868 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71783 00:06:08.868 element at address: 0x2000002fbd80 with size: 1.008118 MiB 00:06:08.868 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71783 00:06:08.868 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:08.868 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:08.868 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:08.868 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:08.868 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:08.868 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:08.868 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:08.868 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:08.868 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:08.868 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71783 00:06:08.868 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:08.868 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71783 00:06:08.868 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:08.868 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71783 00:06:08.868 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:08.868 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71783 00:06:08.868 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:08.868 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71783 00:06:08.868 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:08.868 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71783 00:06:08.868 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:08.868 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:08.868 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:08.868 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:08.868 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:08.868 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:08.868 element at address: 0x2000002dbac0 with size: 0.125488 MiB 00:06:08.868 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71783 00:06:08.868 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:08.868 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71783 00:06:08.868 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:08.868 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:08.868 element at address: 0x200027a65680 with size: 0.023743 MiB 00:06:08.868 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:08.868 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:08.868 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71783 00:06:08.868 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:06:08.868 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:08.868 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:08.868 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71783 00:06:08.868 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:08.868 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71783 00:06:08.868 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:08.868 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71783 00:06:08.868 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:06:08.868 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:08.868 03:14:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:08.868 03:14:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71783 00:06:08.868 03:14:56 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 71783 ']' 00:06:08.868 03:14:56 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 71783 00:06:08.868 03:14:56 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:08.868 03:14:56 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.868 03:14:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71783 00:06:08.868 03:14:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.868 03:14:56 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.868 03:14:56 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71783' 00:06:09.125 killing process with pid 71783 00:06:09.125 03:14:56 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 71783 00:06:09.126 03:14:56 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 71783 00:06:09.126 ************************************ 00:06:09.126 END TEST dpdk_mem_utility 00:06:09.126 ************************************ 00:06:09.126 00:06:09.126 real 0m1.431s 00:06:09.126 user 0m1.468s 00:06:09.126 sys 0m0.357s 00:06:09.126 03:14:56 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.126 03:14:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:09.384 03:14:56 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:09.384 03:14:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.384 03:14:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.384 03:14:56 -- common/autotest_common.sh@10 -- # set +x 
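The event suite that starts next exercises the SPDK event framework with small binaries from test/event. The first, event_perf, launches one reactor per bit set in the core mask and reports how many events each lcore processed in the allotted run time; the reactor test that follows it runs a single reactor through oneshot and timed-tick events. A sketch of the two invocations as this run issues them (-m 0xF asks for four reactors, -t 1 bounds each run to one second):

    # Event-processing throughput: four reactors (core mask 0xF) for one second;
    # per-lcore event counts are printed when the run completes.
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1

    # Single-reactor oneshot/tick test for one second (runs on core mask 0x1 by default).
    /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1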
00:06:09.384 ************************************ 00:06:09.384 START TEST event 00:06:09.384 ************************************ 00:06:09.384 03:14:56 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:09.384 * Looking for test storage... 00:06:09.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:09.384 03:14:56 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.384 03:14:56 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.384 03:14:56 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.384 03:14:56 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.384 03:14:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.384 03:14:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.384 03:14:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.384 03:14:56 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.384 03:14:56 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.384 03:14:56 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.384 03:14:56 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.384 03:14:56 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.384 03:14:56 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.384 03:14:56 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.384 03:14:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.384 03:14:56 event -- scripts/common.sh@344 -- # case "$op" in 00:06:09.384 03:14:56 event -- scripts/common.sh@345 -- # : 1 00:06:09.384 03:14:56 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.384 03:14:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.384 03:14:56 event -- scripts/common.sh@365 -- # decimal 1 00:06:09.384 03:14:56 event -- scripts/common.sh@353 -- # local d=1 00:06:09.384 03:14:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.384 03:14:56 event -- scripts/common.sh@355 -- # echo 1 00:06:09.384 03:14:56 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.384 03:14:56 event -- scripts/common.sh@366 -- # decimal 2 00:06:09.384 03:14:56 event -- scripts/common.sh@353 -- # local d=2 00:06:09.384 03:14:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.384 03:14:56 event -- scripts/common.sh@355 -- # echo 2 00:06:09.384 03:14:56 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.384 03:14:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.384 03:14:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.384 03:14:56 event -- scripts/common.sh@368 -- # return 0 00:06:09.384 03:14:56 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.384 03:14:56 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.384 --rc genhtml_branch_coverage=1 00:06:09.384 --rc genhtml_function_coverage=1 00:06:09.384 --rc genhtml_legend=1 00:06:09.384 --rc geninfo_all_blocks=1 00:06:09.384 --rc geninfo_unexecuted_blocks=1 00:06:09.384 00:06:09.384 ' 00:06:09.384 03:14:56 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.384 --rc genhtml_branch_coverage=1 00:06:09.384 --rc genhtml_function_coverage=1 00:06:09.384 --rc genhtml_legend=1 00:06:09.384 --rc 
geninfo_all_blocks=1 00:06:09.384 --rc geninfo_unexecuted_blocks=1 00:06:09.384 00:06:09.384 ' 00:06:09.384 03:14:56 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.384 --rc genhtml_branch_coverage=1 00:06:09.384 --rc genhtml_function_coverage=1 00:06:09.384 --rc genhtml_legend=1 00:06:09.384 --rc geninfo_all_blocks=1 00:06:09.384 --rc geninfo_unexecuted_blocks=1 00:06:09.384 00:06:09.384 ' 00:06:09.384 03:14:56 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.384 --rc genhtml_branch_coverage=1 00:06:09.384 --rc genhtml_function_coverage=1 00:06:09.384 --rc genhtml_legend=1 00:06:09.384 --rc geninfo_all_blocks=1 00:06:09.384 --rc geninfo_unexecuted_blocks=1 00:06:09.384 00:06:09.384 ' 00:06:09.384 03:14:56 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:09.384 03:14:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:09.384 03:14:56 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:09.384 03:14:56 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:09.384 03:14:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.384 03:14:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.384 ************************************ 00:06:09.384 START TEST event_perf 00:06:09.384 ************************************ 00:06:09.384 03:14:56 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:09.384 Running I/O for 1 seconds...[2024-11-21 03:14:56.922153] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:09.384 [2024-11-21 03:14:56.922360] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71863 ] 00:06:09.642 [2024-11-21 03:14:57.053416] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:09.642 [2024-11-21 03:14:57.083549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:09.642 [2024-11-21 03:14:57.105471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.642 [2024-11-21 03:14:57.105752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.642 Running I/O for 1 seconds...[2024-11-21 03:14:57.105985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.643 [2024-11-21 03:14:57.106024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.576 00:06:10.576 lcore 0: 196587 00:06:10.577 lcore 1: 196587 00:06:10.577 lcore 2: 196589 00:06:10.577 lcore 3: 196586 00:06:10.835 done. 
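The lcov probe traced at the top of this test (and repeated before the scheduler test below) boils down to a field-by-field numeric version comparison. A condensed sketch of that idea, not the exact scripts/common.sh helpers:

    version_lt() {    # returns 0 when $1 is strictly lower than $2
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0
            (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1
        done
        return 1      # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo 'old lcov: pass branch/function coverage flags'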
00:06:10.835 00:06:10.835 real 0m1.260s 00:06:10.835 user 0m4.063s 00:06:10.835 sys 0m0.079s 00:06:10.835 ************************************ 00:06:10.835 END TEST event_perf 00:06:10.835 ************************************ 00:06:10.835 03:14:58 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.835 03:14:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:10.835 03:14:58 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:10.835 03:14:58 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:10.835 03:14:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.835 03:14:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.835 ************************************ 00:06:10.835 START TEST event_reactor 00:06:10.835 ************************************ 00:06:10.835 03:14:58 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:10.835 [2024-11-21 03:14:58.242288] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:10.835 [2024-11-21 03:14:58.242414] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71897 ] 00:06:10.835 [2024-11-21 03:14:58.371387] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:11.094 [2024-11-21 03:14:58.397368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.094 [2024-11-21 03:14:58.416141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.027 test_start 00:06:12.027 oneshot 00:06:12.027 tick 100 00:06:12.027 tick 100 00:06:12.027 tick 250 00:06:12.027 tick 100 00:06:12.027 tick 100 00:06:12.027 tick 250 00:06:12.027 tick 100 00:06:12.027 tick 500 00:06:12.027 tick 100 00:06:12.027 tick 100 00:06:12.027 tick 250 00:06:12.027 tick 100 00:06:12.027 tick 100 00:06:12.027 test_end 00:06:12.027 00:06:12.027 real 0m1.250s 00:06:12.027 user 0m1.081s 00:06:12.027 sys 0m0.062s 00:06:12.027 03:14:59 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.027 03:14:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:12.027 ************************************ 00:06:12.027 END TEST event_reactor 00:06:12.027 ************************************ 00:06:12.027 03:14:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:12.027 03:14:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:12.027 03:14:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.027 03:14:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.027 ************************************ 00:06:12.027 START TEST event_reactor_perf 00:06:12.027 ************************************ 00:06:12.027 03:14:59 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:12.027 [2024-11-21 03:14:59.542561] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:06:12.027 [2024-11-21 03:14:59.542785] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71934 ] 00:06:12.286 [2024-11-21 03:14:59.671511] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:12.286 [2024-11-21 03:14:59.702604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.286 [2024-11-21 03:14:59.721616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.221 test_start 00:06:13.221 test_end 00:06:13.221 Performance: 315776 events per second 00:06:13.221 00:06:13.221 real 0m1.256s 00:06:13.221 user 0m1.074s 00:06:13.221 sys 0m0.073s 00:06:13.221 03:15:00 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.221 03:15:00 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:13.221 ************************************ 00:06:13.221 END TEST event_reactor_perf 00:06:13.221 ************************************ 00:06:13.516 03:15:00 event -- event/event.sh@49 -- # uname -s 00:06:13.516 03:15:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:13.516 03:15:00 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:13.516 03:15:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.516 03:15:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.516 03:15:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.516 ************************************ 00:06:13.516 START TEST event_scheduler 00:06:13.516 ************************************ 00:06:13.516 03:15:00 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:13.516 * Looking for test storage... 
00:06:13.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:13.516 03:15:00 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:13.516 03:15:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:13.516 03:15:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:13.516 03:15:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.516 03:15:00 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:13.516 03:15:00 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.516 03:15:00 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:13.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.516 --rc genhtml_branch_coverage=1 00:06:13.516 --rc genhtml_function_coverage=1 00:06:13.516 --rc genhtml_legend=1 00:06:13.516 --rc geninfo_all_blocks=1 00:06:13.516 --rc geninfo_unexecuted_blocks=1 00:06:13.516 00:06:13.516 ' 00:06:13.516 03:15:00 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:13.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.516 --rc genhtml_branch_coverage=1 00:06:13.516 --rc genhtml_function_coverage=1 00:06:13.516 --rc genhtml_legend=1 00:06:13.516 --rc geninfo_all_blocks=1 00:06:13.516 --rc geninfo_unexecuted_blocks=1 00:06:13.516 00:06:13.516 ' 00:06:13.516 03:15:00 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:13.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.516 --rc genhtml_branch_coverage=1 00:06:13.516 --rc genhtml_function_coverage=1 00:06:13.516 --rc genhtml_legend=1 00:06:13.516 --rc geninfo_all_blocks=1 00:06:13.516 --rc geninfo_unexecuted_blocks=1 00:06:13.516 00:06:13.516 ' 00:06:13.516 03:15:00 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:13.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.516 --rc genhtml_branch_coverage=1 00:06:13.516 --rc genhtml_function_coverage=1 00:06:13.516 --rc genhtml_legend=1 00:06:13.516 --rc geninfo_all_blocks=1 00:06:13.516 --rc geninfo_unexecuted_blocks=1 00:06:13.516 00:06:13.516 ' 00:06:13.516 03:15:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:13.516 03:15:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=72004 00:06:13.516 03:15:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.516 03:15:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:13.517 03:15:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 72004 00:06:13.517 03:15:00 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 72004 ']' 00:06:13.517 03:15:00 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.517 03:15:00 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.517 03:15:00 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.517 03:15:00 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.517 03:15:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:13.517 [2024-11-21 03:15:01.057244] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:13.517 [2024-11-21 03:15:01.057368] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72004 ] 00:06:13.775 [2024-11-21 03:15:01.190326] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:13.775 [2024-11-21 03:15:01.212991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:13.775 [2024-11-21 03:15:01.239642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.775 [2024-11-21 03:15:01.240042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.775 [2024-11-21 03:15:01.240479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.775 [2024-11-21 03:15:01.240518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.342 03:15:01 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.342 03:15:01 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:14.342 03:15:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:14.342 03:15:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.342 03:15:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.603 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:14.603 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:14.603 POWER: intel_pstate driver is not supported 00:06:14.603 POWER: cppc_cpufreq driver is not supported 00:06:14.603 POWER: amd-pstate driver is not supported 00:06:14.603 POWER: acpi-cpufreq driver is not supported 00:06:14.603 POWER: Unable to set Power Management Environment for lcore 0 00:06:14.603 [2024-11-21 03:15:01.905893] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:14.603 [2024-11-21 03:15:01.905931] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:14.603 [2024-11-21 03:15:01.905944] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:14.603 [2024-11-21 03:15:01.905959] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:14.603 [2024-11-21 03:15:01.905980] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 
00:06:14.603 [2024-11-21 03:15:01.905988] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:14.603 03:15:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.603 03:15:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:14.603 03:15:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.603 03:15:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.603 [2024-11-21 03:15:01.964283] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:14.603 03:15:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.603 03:15:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:14.603 03:15:01 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.603 03:15:01 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.603 03:15:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.603 ************************************ 00:06:14.603 START TEST scheduler_create_thread 00:06:14.603 ************************************ 00:06:14.603 03:15:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:14.603 03:15:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:14.603 03:15:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.603 03:15:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.603 2 00:06:14.603 03:15:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.603 03:15:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:14.603 03:15:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.603 03:15:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.603 3 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.603 4 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.603 5 00:06:14.603 03:15:02 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.603 6 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.603 7 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.603 8 00:06:14.603 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.604 9 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.604 10 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:14.604 03:15:02 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.604 03:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.979 03:15:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.979 03:15:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:15.979 03:15:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:15.979 03:15:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.979 03:15:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.357 ************************************ 00:06:17.357 END TEST scheduler_create_thread 00:06:17.357 ************************************ 00:06:17.357 03:15:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.357 00:06:17.357 real 0m2.613s 00:06:17.357 user 0m0.014s 00:06:17.357 sys 0m0.006s 00:06:17.357 03:15:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.357 03:15:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.357 03:15:04 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:17.357 03:15:04 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 72004 00:06:17.357 03:15:04 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 72004 ']' 00:06:17.357 03:15:04 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 72004 00:06:17.357 03:15:04 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:17.357 03:15:04 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.357 03:15:04 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72004 00:06:17.357 killing process with pid 72004 00:06:17.357 03:15:04 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:17.357 03:15:04 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:17.357 03:15:04 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72004' 00:06:17.357 03:15:04 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 72004 00:06:17.357 03:15:04 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 72004 00:06:17.615 [2024-11-21 03:15:05.071557] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
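scheduler_create_thread above is driven entirely over RPC. An abridged sketch reconstructing that sequence with the same plugin commands seen in the xtrace (socket and wrapper paths assumed; thread ids come back on stdout, as with thread_id=11 and 12 above):

    rpc='scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin'

    # one fully-active and one idle thread pinned to each of the four cores
    for core in 0 1 2 3; do
        mask=$(printf '0x%x' $((1 << core)))
        $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
        $rpc scheduler_thread_create -n idle_pinned   -m "$mask" -a 0
    done

    # unpinned threads: retune one, delete the other
    tid=$($rpc scheduler_thread_create -n half_active -a 0)
    $rpc scheduler_thread_set_active "$tid" 50    # bump its busy share to 50%
    tid=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$tid"           # removed on a later scheduler tick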
00:06:17.872 ************************************ 00:06:17.872 END TEST event_scheduler 00:06:17.872 ************************************ 00:06:17.872 00:06:17.872 real 0m4.380s 00:06:17.872 user 0m8.065s 00:06:17.872 sys 0m0.331s 00:06:17.872 03:15:05 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.872 03:15:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.872 03:15:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:17.872 03:15:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:17.872 03:15:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.872 03:15:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.872 03:15:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.872 ************************************ 00:06:17.872 START TEST app_repeat 00:06:17.872 ************************************ 00:06:17.872 03:15:05 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:17.872 03:15:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.872 03:15:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.872 03:15:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:17.872 03:15:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.872 03:15:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:17.872 03:15:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:17.872 03:15:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:17.872 Process app_repeat pid: 72099 00:06:17.872 spdk_app_start Round 0 00:06:17.872 03:15:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=72099 00:06:17.872 03:15:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.872 03:15:05 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:17.872 03:15:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72099' 00:06:17.872 03:15:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:17.872 03:15:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:17.872 03:15:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72099 /var/tmp/spdk-nbd.sock 00:06:17.872 03:15:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72099 ']' 00:06:17.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.872 03:15:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.872 03:15:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.872 03:15:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.872 03:15:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.872 03:15:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.872 [2024-11-21 03:15:05.324724] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:06:17.872 [2024-11-21 03:15:05.324840] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72099 ] 00:06:18.129 [2024-11-21 03:15:05.456223] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:18.129 [2024-11-21 03:15:05.484293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.129 [2024-11-21 03:15:05.505646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.129 [2024-11-21 03:15:05.505721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.695 03:15:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.695 03:15:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:18.695 03:15:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.953 Malloc0 00:06:18.953 03:15:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.213 Malloc1 00:06:19.213 03:15:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.213 03:15:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.213 03:15:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.213 03:15:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.213 03:15:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.213 03:15:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.213 03:15:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.213 03:15:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.213 03:15:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.214 03:15:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.214 03:15:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.214 03:15:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.214 03:15:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:19.214 03:15:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.214 03:15:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.214 03:15:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.473 /dev/nbd0 00:06:19.473 03:15:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.473 03:15:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.473 03:15:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:19.473 03:15:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:19.473 03:15:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.473 03:15:06 event.app_repeat -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.473 03:15:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:19.473 03:15:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:19.473 03:15:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.473 03:15:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.473 03:15:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.473 1+0 records in 00:06:19.473 1+0 records out 00:06:19.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448253 s, 9.1 MB/s 00:06:19.473 03:15:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.473 03:15:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:19.473 03:15:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.473 03:15:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.473 03:15:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:19.473 03:15:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.473 03:15:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.474 03:15:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.732 /dev/nbd1 00:06:19.732 03:15:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.732 03:15:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.732 03:15:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:19.732 03:15:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:19.732 03:15:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.732 03:15:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.732 03:15:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:19.732 03:15:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:19.732 03:15:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.732 03:15:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.732 03:15:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.732 1+0 records in 00:06:19.732 1+0 records out 00:06:19.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269481 s, 15.2 MB/s 00:06:19.732 03:15:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.732 03:15:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:19.732 03:15:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.732 03:15:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.732 03:15:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:19.732 03:15:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.732 03:15:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.732 03:15:07 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.732 03:15:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.732 03:15:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.991 { 00:06:19.991 "nbd_device": "/dev/nbd0", 00:06:19.991 "bdev_name": "Malloc0" 00:06:19.991 }, 00:06:19.991 { 00:06:19.991 "nbd_device": "/dev/nbd1", 00:06:19.991 "bdev_name": "Malloc1" 00:06:19.991 } 00:06:19.991 ]' 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.991 { 00:06:19.991 "nbd_device": "/dev/nbd0", 00:06:19.991 "bdev_name": "Malloc0" 00:06:19.991 }, 00:06:19.991 { 00:06:19.991 "nbd_device": "/dev/nbd1", 00:06:19.991 "bdev_name": "Malloc1" 00:06:19.991 } 00:06:19.991 ]' 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.991 /dev/nbd1' 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.991 /dev/nbd1' 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.991 256+0 records in 00:06:19.991 256+0 records out 00:06:19.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00712736 s, 147 MB/s 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.991 256+0 records in 00:06:19.991 256+0 records out 00:06:19.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170034 s, 61.7 MB/s 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.991 256+0 records in 00:06:19.991 256+0 records out 00:06:19.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212738 s, 49.3 MB/s 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.991 
03:15:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.991 03:15:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.249 03:15:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.249 03:15:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.249 03:15:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.249 03:15:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.249 03:15:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.249 03:15:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.249 03:15:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.249 03:15:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.249 03:15:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.249 03:15:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.507 03:15:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.507 03:15:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.507 03:15:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.507 03:15:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.507 03:15:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.507 03:15:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.507 03:15:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.507 03:15:07 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:20.507 03:15:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.507 03:15:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.507 03:15:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.507 03:15:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.507 03:15:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.507 03:15:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.507 03:15:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.507 03:15:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.507 03:15:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.507 03:15:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:20.507 03:15:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.507 03:15:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.507 03:15:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.507 03:15:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.507 03:15:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.507 03:15:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.764 03:15:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:21.022 [2024-11-21 03:15:08.348795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.022 [2024-11-21 03:15:08.368605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.022 [2024-11-21 03:15:08.368720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.022 [2024-11-21 03:15:08.400638] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.022 [2024-11-21 03:15:08.400706] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.311 spdk_app_start Round 1 00:06:24.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.311 03:15:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:24.311 03:15:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:24.311 03:15:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72099 /var/tmp/spdk-nbd.sock 00:06:24.311 03:15:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72099 ']' 00:06:24.311 03:15:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.311 03:15:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.311 03:15:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
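Round 0 above (and Round 1 repeating below) verify the malloc bdevs with a plain dd/cmp round-trip through the exported nbd devices. A minimal sketch of that check, with illustrative paths:

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data

    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write it through the nbd
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                             # byte-compare the readback
    done
    rm -f "$tmp"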
00:06:24.311 03:15:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.311 03:15:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.311 03:15:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.311 03:15:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:24.311 03:15:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.311 Malloc0 00:06:24.311 03:15:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.569 Malloc1 00:06:24.569 03:15:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.569 03:15:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.569 03:15:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.569 03:15:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.569 03:15:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.569 03:15:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.569 03:15:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.569 03:15:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.569 03:15:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.569 03:15:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.569 03:15:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.569 03:15:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.569 03:15:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.569 03:15:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.569 03:15:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.569 03:15:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.569 /dev/nbd0 00:06:24.569 03:15:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.569 03:15:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.569 03:15:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:24.569 03:15:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:24.569 03:15:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.569 03:15:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.569 03:15:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:24.569 03:15:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:24.569 03:15:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.569 03:15:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.569 03:15:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.827 1+0 records in 00:06:24.827 1+0 records out 
00:06:24.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615259 s, 6.7 MB/s 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:24.827 03:15:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.827 03:15:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.827 03:15:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.827 /dev/nbd1 00:06:24.827 03:15:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.827 03:15:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.827 1+0 records in 00:06:24.827 1+0 records out 00:06:24.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278485 s, 14.7 MB/s 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.827 03:15:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:24.827 03:15:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.827 03:15:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.827 03:15:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.827 03:15:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.827 03:15:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.084 { 00:06:25.084 "nbd_device": "/dev/nbd0", 00:06:25.084 "bdev_name": "Malloc0" 00:06:25.084 }, 00:06:25.084 { 00:06:25.084 "nbd_device": "/dev/nbd1", 00:06:25.084 "bdev_name": "Malloc1" 00:06:25.084 } 
00:06:25.084 ]' 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.084 { 00:06:25.084 "nbd_device": "/dev/nbd0", 00:06:25.084 "bdev_name": "Malloc0" 00:06:25.084 }, 00:06:25.084 { 00:06:25.084 "nbd_device": "/dev/nbd1", 00:06:25.084 "bdev_name": "Malloc1" 00:06:25.084 } 00:06:25.084 ]' 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.084 /dev/nbd1' 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.084 /dev/nbd1' 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.084 256+0 records in 00:06:25.084 256+0 records out 00:06:25.084 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00609759 s, 172 MB/s 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:25.084 256+0 records in 00:06:25.084 256+0 records out 00:06:25.084 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139587 s, 75.1 MB/s 00:06:25.084 03:15:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.341 03:15:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:25.341 256+0 records in 00:06:25.341 256+0 records out 00:06:25.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162894 s, 64.4 MB/s 00:06:25.341 03:15:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:25.341 03:15:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.341 03:15:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.341 03:15:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:25.341 03:15:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.341 03:15:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:25.341 03:15:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:25.341 03:15:12 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.341 03:15:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:25.341 03:15:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.341 03:15:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:25.341 03:15:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.341 03:15:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:25.341 03:15:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.341 03:15:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.342 03:15:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.342 03:15:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:25.342 03:15:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.342 03:15:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.342 03:15:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.342 03:15:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.342 03:15:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.342 03:15:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.342 03:15:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.342 03:15:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.342 03:15:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.342 03:15:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.342 03:15:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.342 03:15:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.599 03:15:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.599 03:15:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.599 03:15:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.599 03:15:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.599 03:15:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.599 03:15:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.599 03:15:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.599 03:15:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.599 03:15:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.599 03:15:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.599 03:15:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.859 03:15:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.859 03:15:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.859 03:15:13 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:25.859 03:15:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.859 03:15:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.859 03:15:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.859 03:15:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:25.859 03:15:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.859 03:15:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.859 03:15:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.859 03:15:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.859 03:15:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.859 03:15:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:26.119 03:15:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:26.119 [2024-11-21 03:15:13.607630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.119 [2024-11-21 03:15:13.623670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.119 [2024-11-21 03:15:13.623678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.119 [2024-11-21 03:15:13.653331] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:26.119 [2024-11-21 03:15:13.653379] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:29.396 spdk_app_start Round 2 00:06:29.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:29.396 03:15:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:29.396 03:15:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:29.396 03:15:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72099 /var/tmp/spdk-nbd.sock 00:06:29.396 03:15:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72099 ']' 00:06:29.396 03:15:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.396 03:15:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.396 03:15:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
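The round traced above is SPDK's nbd data-verify cycle: nbd_dd_data_verify fills a temp file with 1 MiB of random data, writes it through each exported /dev/nbdX with O_DIRECT, reads it back with cmp, and detaches the devices over RPC. A condensed sketch of that cycle, assuming an spdk_tgt is already listening on /var/tmp/spdk-nbd.sock and serving the two Malloc bdevs (the bare rpc.py invocation and the temp-file handling are illustrative):

    # Assumes: spdk_tgt running with Malloc0/Malloc1, nbd kernel module loaded.
    SOCK=/var/tmp/spdk-nbd.sock
    TMP=$(mktemp)

    # Export each bdev as a kernel block device.
    rpc.py -s "$SOCK" nbd_start_disk Malloc0 /dev/nbd0
    rpc.py -s "$SOCK" nbd_start_disk Malloc1 /dev/nbd1

    # Write the same random megabyte to both devices, bypassing the page cache.
    dd if=/dev/urandom of="$TMP" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$TMP" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Verify byte-for-byte; cmp exits non-zero on the first mismatch.
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$TMP" "$dev"
    done
    rm -f "$TMP"

    # Detach and confirm nothing is exported any more.
    for dev in /dev/nbd0 /dev/nbd1; do
        rpc.py -s "$SOCK" nbd_stop_disk "$dev"
    done
    rpc.py -s "$SOCK" nbd_get_disks | jq -r '.[] | .nbd_device'   # prints nothing

The jq filter is the same one the trace uses to turn the nbd_get_disks JSON into a device list; grep -c /dev/nbd over that output is how the harness counts attached devices.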
00:06:29.396 03:15:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.396 03:15:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.396 03:15:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.396 03:15:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:29.396 03:15:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.396 Malloc0 00:06:29.396 03:15:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.654 Malloc1 00:06:29.654 03:15:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.654 03:15:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.654 03:15:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.654 03:15:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:29.654 03:15:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.654 03:15:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:29.654 03:15:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.654 03:15:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.654 03:15:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.654 03:15:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:29.654 03:15:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.654 03:15:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:29.654 03:15:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:29.654 03:15:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:29.654 03:15:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.654 03:15:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.925 /dev/nbd0 00:06:29.925 03:15:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.925 03:15:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.925 03:15:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:29.925 03:15:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:29.925 03:15:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:29.925 03:15:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:29.926 03:15:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:29.926 03:15:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:29.926 03:15:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:29.926 03:15:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:29.926 03:15:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.926 1+0 records in 00:06:29.926 1+0 records out 
00:06:29.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272153 s, 15.1 MB/s 00:06:29.926 03:15:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.926 03:15:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:29.926 03:15:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.926 03:15:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:29.926 03:15:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:29.926 03:15:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.926 03:15:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.926 03:15:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:30.185 /dev/nbd1 00:06:30.185 03:15:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.185 03:15:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.185 03:15:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:30.185 03:15:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:30.185 03:15:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:30.185 03:15:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:30.185 03:15:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:30.185 03:15:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:30.185 03:15:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:30.185 03:15:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:30.185 03:15:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.185 1+0 records in 00:06:30.185 1+0 records out 00:06:30.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217142 s, 18.9 MB/s 00:06:30.185 03:15:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.185 03:15:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:30.185 03:15:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.185 03:15:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:30.185 03:15:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:30.185 03:15:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.185 03:15:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.185 03:15:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.185 03:15:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.185 03:15:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.442 03:15:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:30.442 { 00:06:30.442 "nbd_device": "/dev/nbd0", 00:06:30.442 "bdev_name": "Malloc0" 00:06:30.442 }, 00:06:30.442 { 00:06:30.442 "nbd_device": "/dev/nbd1", 00:06:30.442 "bdev_name": "Malloc1" 00:06:30.442 } 
00:06:30.442 ]' 00:06:30.442 03:15:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.442 03:15:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.442 { 00:06:30.442 "nbd_device": "/dev/nbd0", 00:06:30.442 "bdev_name": "Malloc0" 00:06:30.442 }, 00:06:30.442 { 00:06:30.442 "nbd_device": "/dev/nbd1", 00:06:30.442 "bdev_name": "Malloc1" 00:06:30.442 } 00:06:30.442 ]' 00:06:30.442 03:15:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.442 /dev/nbd1' 00:06:30.442 03:15:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.442 /dev/nbd1' 00:06:30.442 03:15:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.442 03:15:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.442 03:15:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.442 03:15:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.442 03:15:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.442 03:15:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.442 03:15:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.443 256+0 records in 00:06:30.443 256+0 records out 00:06:30.443 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00765594 s, 137 MB/s 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:30.443 256+0 records in 00:06:30.443 256+0 records out 00:06:30.443 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138792 s, 75.5 MB/s 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.443 256+0 records in 00:06:30.443 256+0 records out 00:06:30.443 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212118 s, 49.4 MB/s 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.443 03:15:17 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.443 03:15:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.700 03:15:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.700 03:15:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.700 03:15:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.700 03:15:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.700 03:15:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.700 03:15:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.700 03:15:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.700 03:15:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.700 03:15:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.700 03:15:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.957 03:15:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.957 03:15:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.957 03:15:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.957 03:15:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.957 03:15:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.957 03:15:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.957 03:15:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.957 03:15:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.957 03:15:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.957 03:15:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.957 03:15:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.215 03:15:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.215 03:15:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.215 03:15:18 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.215 03:15:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.215 03:15:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.215 03:15:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.215 03:15:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.215 03:15:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.215 03:15:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.215 03:15:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.215 03:15:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.215 03:15:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.215 03:15:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.473 03:15:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:31.473 [2024-11-21 03:15:18.880532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.473 [2024-11-21 03:15:18.897738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.473 [2024-11-21 03:15:18.897744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.473 [2024-11-21 03:15:18.928371] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:31.473 [2024-11-21 03:15:18.928421] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:34.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:34.837 03:15:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 72099 /var/tmp/spdk-nbd.sock 00:06:34.837 03:15:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72099 ']' 00:06:34.837 03:15:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.837 03:15:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.837 03:15:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
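Attach and detach are made synchronous by polling: waitfornbd loops up to 20 times for the device name to appear in /proc/partitions and then issues a single-block O_DIRECT read to prove the device answers I/O, while waitfornbd_exit waits for the name to disappear after nbd_stop_disk. A sketch of the two helpers as reconstructed from the xtrace (the sleep interval is an assumption; the trace only shows the 20-try bound):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # interval assumed, not visible in the trace
        done
        (( i <= 20 )) || return 1
        # Probe read: one 4 KiB block, O_DIRECT, must come back non-empty.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1
    }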
00:06:34.837 03:15:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.837 03:15:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.837 03:15:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.837 03:15:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:34.837 03:15:22 event.app_repeat -- event/event.sh@39 -- # killprocess 72099 00:06:34.837 03:15:22 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 72099 ']' 00:06:34.837 03:15:22 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 72099 00:06:34.837 03:15:22 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:34.837 03:15:22 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.837 03:15:22 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72099 00:06:34.837 killing process with pid 72099 00:06:34.837 03:15:22 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.837 03:15:22 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.837 03:15:22 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72099' 00:06:34.837 03:15:22 event.app_repeat -- common/autotest_common.sh@973 -- # kill 72099 00:06:34.837 03:15:22 event.app_repeat -- common/autotest_common.sh@978 -- # wait 72099 00:06:34.837 spdk_app_start is called in Round 0. 00:06:34.837 Shutdown signal received, stop current app iteration 00:06:34.837 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 reinitialization... 00:06:34.837 spdk_app_start is called in Round 1. 00:06:34.837 Shutdown signal received, stop current app iteration 00:06:34.837 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 reinitialization... 00:06:34.837 spdk_app_start is called in Round 2. 00:06:34.837 Shutdown signal received, stop current app iteration 00:06:34.837 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 reinitialization... 00:06:34.837 spdk_app_start is called in Round 3. 00:06:34.837 Shutdown signal received, stop current app iteration 00:06:34.837 ************************************ 00:06:34.837 END TEST app_repeat 00:06:34.837 ************************************ 00:06:34.837 03:15:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:34.837 03:15:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:34.837 00:06:34.837 real 0m16.880s 00:06:34.837 user 0m37.757s 00:06:34.837 sys 0m2.085s 00:06:34.837 03:15:22 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.837 03:15:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.837 03:15:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:34.837 03:15:22 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:34.837 03:15:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.837 03:15:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.837 03:15:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.837 ************************************ 00:06:34.837 START TEST cpu_locks 00:06:34.837 ************************************ 00:06:34.837 03:15:22 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:34.837 * Looking for test storage... 
00:06:34.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:34.837 03:15:22 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:34.837 03:15:22 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:34.837 03:15:22 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:34.837 03:15:22 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.837 03:15:22 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:34.837 03:15:22 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.837 03:15:22 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:34.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.837 --rc genhtml_branch_coverage=1 00:06:34.838 --rc genhtml_function_coverage=1 00:06:34.838 --rc genhtml_legend=1 00:06:34.838 --rc geninfo_all_blocks=1 00:06:34.838 --rc geninfo_unexecuted_blocks=1 00:06:34.838 00:06:34.838 ' 00:06:34.838 03:15:22 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:34.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.838 --rc genhtml_branch_coverage=1 00:06:34.838 --rc genhtml_function_coverage=1 
00:06:34.838 --rc genhtml_legend=1 00:06:34.838 --rc geninfo_all_blocks=1 00:06:34.838 --rc geninfo_unexecuted_blocks=1 00:06:34.838 00:06:34.838 ' 00:06:34.838 03:15:22 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:34.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.838 --rc genhtml_branch_coverage=1 00:06:34.838 --rc genhtml_function_coverage=1 00:06:34.838 --rc genhtml_legend=1 00:06:34.838 --rc geninfo_all_blocks=1 00:06:34.838 --rc geninfo_unexecuted_blocks=1 00:06:34.838 00:06:34.838 ' 00:06:34.838 03:15:22 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:34.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.838 --rc genhtml_branch_coverage=1 00:06:34.838 --rc genhtml_function_coverage=1 00:06:34.838 --rc genhtml_legend=1 00:06:34.838 --rc geninfo_all_blocks=1 00:06:34.838 --rc geninfo_unexecuted_blocks=1 00:06:34.838 00:06:34.838 ' 00:06:34.838 03:15:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:34.838 03:15:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:34.838 03:15:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:34.838 03:15:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:34.838 03:15:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.838 03:15:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.838 03:15:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.838 ************************************ 00:06:34.838 START TEST default_locks 00:06:34.838 ************************************ 00:06:34.838 03:15:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:34.838 03:15:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72519 00:06:34.838 03:15:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72519 00:06:34.838 03:15:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72519 ']' 00:06:34.838 03:15:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.838 03:15:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.838 03:15:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.838 03:15:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.838 03:15:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.838 03:15:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.095 [2024-11-21 03:15:22.465012] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
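Just before the lock tests proper, cpu_locks gated its lcov options on the installed lcov version using the version comparison from scripts/common.sh, visible in the trace above: both version strings are split on '.', '-' and ':' into arrays and compared field by field. A sketch of the less-than path of that comparison (a simplification of the real cmp_versions, which also handles the other comparison operators):

    # Return 0 when version string $1 is strictly older than $2.
    lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # the decision the trace reaches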
00:06:35.095 [2024-11-21 03:15:22.465241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72519 ] 00:06:35.095 [2024-11-21 03:15:22.597710] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:35.095 [2024-11-21 03:15:22.624807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.095 [2024-11-21 03:15:22.644330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.029 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.029 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:36.029 03:15:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72519 00:06:36.029 03:15:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72519 00:06:36.029 03:15:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.029 03:15:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72519 00:06:36.029 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 72519 ']' 00:06:36.029 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 72519 00:06:36.029 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:36.029 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.029 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72519 00:06:36.029 killing process with pid 72519 00:06:36.029 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.029 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.029 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72519' 00:06:36.029 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 72519 00:06:36.029 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 72519 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72519 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72519 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:36.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:36.288 ERROR: process (pid: 72519) is no longer running 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 72519 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72519 ']' 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.288 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72519) - No such process 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:36.288 ************************************ 00:06:36.288 END TEST default_locks 00:06:36.288 00:06:36.288 real 0m1.398s 00:06:36.288 user 0m1.417s 00:06:36.288 sys 0m0.420s 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.288 03:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.288 ************************************ 00:06:36.288 03:15:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:36.288 03:15:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.288 03:15:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.288 03:15:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.289 ************************************ 00:06:36.289 START TEST default_locks_via_rpc 00:06:36.289 ************************************ 00:06:36.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
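The default_locks run that just finished boots spdk_tgt pinned to core 0 (-m 0x1), asserts via lslocks that the process holds its spdk_cpu_lock file, kills it, and then inverts the assertion: waitforlisten on the dead pid is wrapped in a NOT-style helper so that the "ERROR: process (pid: 72519) is no longer running" outcome counts as a pass. A condensed sketch of that flow (the NOT function below is a simplification; the real helper in autotest_common.sh also tracks exit codes above 128 for signal deaths, which is the es bookkeeping visible in the trace):

    # Invert a command's status so an expected failure passes the test.
    NOT() { if "$@"; then return 1; else return 0; fi; }

    # Boot a target pinned to core 0; it takes an advisory lock for that core.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    pid=$!

    # locks_exist: the per-core lock shows up under the spdk_cpu_lock name.
    lslocks -p "$pid" | grep -q spdk_cpu_lock

    # Tear down and assert the pid is really gone.
    kill -SIGTERM "$pid"
    wait "$pid"
    NOT kill -0 "$pid" && echo "pid $pid exited, as expected"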
00:06:36.289 03:15:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:36.289 03:15:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72566 00:06:36.289 03:15:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72566 00:06:36.289 03:15:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72566 ']' 00:06:36.289 03:15:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.289 03:15:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.289 03:15:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.289 03:15:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.289 03:15:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.289 03:15:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.547 [2024-11-21 03:15:23.897854] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:36.547 [2024-11-21 03:15:23.898099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72566 ] 00:06:36.547 [2024-11-21 03:15:24.024388] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
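default_locks_via_rpc, which this startup belongs to, exercises the same lock at runtime instead of at boot: it drops the per-core lock files on the live target with the framework_disable_cpumask_locks RPC, confirms they are gone, re-takes them with framework_enable_cpumask_locks, and confirms they are back before killing the target. A minimal sketch of that round trip (the $pid variable and the bare rpc.py invocation are illustrative; the RPC names are the ones in the trace):

    SOCK=/var/tmp/spdk.sock

    # Release the core lock files on a running target...
    rpc.py -s "$SOCK" framework_disable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: lock still held"

    # ...then take them again and verify they reappear.
    rpc.py -s "$SOCK" framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"

The trace checks absence through its no_locks helper rather than lslocks; collapsing both checks onto lslocks here is a simplification.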
00:06:36.547 [2024-11-21 03:15:24.055274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.547 [2024-11-21 03:15:24.074794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72566 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72566 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72566 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 72566 ']' 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 72566 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72566 00:06:37.483 killing process with pid 72566 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72566' 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 72566 00:06:37.483 03:15:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 72566 00:06:37.741 ************************************ 00:06:37.741 END TEST default_locks_via_rpc 00:06:37.741 
************************************ 00:06:37.741 00:06:37.741 real 0m1.370s 00:06:37.741 user 0m1.426s 00:06:37.741 sys 0m0.376s 00:06:37.741 03:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.741 03:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.741 03:15:25 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:37.741 03:15:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.741 03:15:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.741 03:15:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.741 ************************************ 00:06:37.741 START TEST non_locking_app_on_locked_coremask 00:06:37.741 ************************************ 00:06:37.741 03:15:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:37.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.741 03:15:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72613 00:06:37.741 03:15:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72613 /var/tmp/spdk.sock 00:06:37.741 03:15:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72613 ']' 00:06:37.741 03:15:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.741 03:15:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.741 03:15:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.741 03:15:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.741 03:15:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.741 03:15:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.999 [2024-11-21 03:15:25.344864] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:37.999 [2024-11-21 03:15:25.344993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72613 ] 00:06:37.999 [2024-11-21 03:15:25.477840] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:37.999 [2024-11-21 03:15:25.508981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.999 [2024-11-21 03:15:25.528497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:38.932 03:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.932 03:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.932 03:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72629 00:06:38.932 03:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72629 /var/tmp/spdk2.sock 00:06:38.932 03:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72629 ']' 00:06:38.932 03:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:38.932 03:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.932 03:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.932 03:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.932 03:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.932 03:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.932 [2024-11-21 03:15:26.252749] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:38.933 [2024-11-21 03:15:26.253054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72629 ] 00:06:38.933 [2024-11-21 03:15:26.388634] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:38.933 [2024-11-21 03:15:26.435139] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
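The pair of launches above is the heart of non_locking_app_on_locked_coremask: the first target (pid 72613) claims core 0 normally, and a second target can still come up on the same -m 0x1 mask because it opts out with --disable-cpumask-locks ("CPU core locks deactivated" in its startup notices) and uses a separate RPC socket so the two control planes do not collide. In sketch form (binary path and socket names as in the trace):

    BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First instance: takes the advisory lock for core 0 on the default socket.
    "$BIN" -m 0x1 &

    # Second instance: same core mask, but skips the lock and talks on its own socket.
    "$BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

The companion test that follows, locking_app_on_unlocked_coremask, flips the order: the opted-out instance starts first, so a normally locking instance can then claim the same core because nothing is holding it.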
00:06:38.933 [2024-11-21 03:15:26.435184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.933 [2024-11-21 03:15:26.470387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.866 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.866 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:39.866 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72613 00:06:39.866 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72613 00:06:39.866 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.866 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72613 00:06:39.866 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72613 ']' 00:06:39.866 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72613 00:06:39.866 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:39.866 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.866 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72613 00:06:39.866 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.866 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.866 killing process with pid 72613 00:06:39.866 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72613' 00:06:39.866 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72613 00:06:39.866 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72613 00:06:40.433 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72629 00:06:40.433 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72629 ']' 00:06:40.433 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72629 00:06:40.433 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:40.433 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.433 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72629 00:06:40.433 killing process with pid 72629 00:06:40.433 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.433 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.433 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72629' 00:06:40.433 03:15:27 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72629 00:06:40.433 03:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72629 00:06:40.691 ************************************ 00:06:40.691 END TEST non_locking_app_on_locked_coremask 00:06:40.691 ************************************ 00:06:40.691 00:06:40.691 real 0m2.879s 00:06:40.691 user 0m3.183s 00:06:40.691 sys 0m0.792s 00:06:40.691 03:15:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.691 03:15:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.691 03:15:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:40.691 03:15:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.691 03:15:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.691 03:15:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.691 ************************************ 00:06:40.691 START TEST locking_app_on_unlocked_coremask 00:06:40.691 ************************************ 00:06:40.691 03:15:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:40.691 03:15:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72687 00:06:40.691 03:15:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72687 /var/tmp/spdk.sock 00:06:40.691 03:15:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72687 ']' 00:06:40.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.691 03:15:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.691 03:15:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.691 03:15:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:40.691 03:15:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.691 03:15:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.691 03:15:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.949 [2024-11-21 03:15:28.260882] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:40.949 [2024-11-21 03:15:28.261011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72687 ] 00:06:40.949 [2024-11-21 03:15:28.394180] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:40.949 [2024-11-21 03:15:28.425626] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:40.949 [2024-11-21 03:15:28.425664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.949 [2024-11-21 03:15:28.444941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.890 03:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.890 03:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:41.890 03:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72703 00:06:41.890 03:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72703 /var/tmp/spdk2.sock 00:06:41.890 03:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72703 ']' 00:06:41.890 03:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.890 03:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:41.891 03:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.891 03:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.891 03:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.891 03:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.891 [2024-11-21 03:15:29.169956] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:41.891 [2024-11-21 03:15:29.170220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72703 ] 00:06:41.891 [2024-11-21 03:15:29.301269] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:41.891 [2024-11-21 03:15:29.343868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.891 [2024-11-21 03:15:29.383121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.456 03:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.456 03:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:42.456 03:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72703 00:06:42.456 03:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.456 03:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72703 00:06:43.021 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72687 00:06:43.022 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72687 ']' 00:06:43.022 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72687 00:06:43.022 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:43.022 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.022 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72687 00:06:43.022 killing process with pid 72687 00:06:43.022 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.022 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.022 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72687' 00:06:43.022 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72687 00:06:43.022 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72687 00:06:43.279 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72703 00:06:43.279 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72703 ']' 00:06:43.279 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72703 00:06:43.279 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:43.279 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.279 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72703 00:06:43.279 killing process with pid 72703 00:06:43.279 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.279 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.279 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72703' 00:06:43.279 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@973 -- # kill 72703 00:06:43.279 03:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72703 00:06:43.537 ************************************ 00:06:43.537 END TEST locking_app_on_unlocked_coremask 00:06:43.537 ************************************ 00:06:43.537 00:06:43.537 real 0m2.902s 00:06:43.537 user 0m3.207s 00:06:43.537 sys 0m0.773s 00:06:43.537 03:15:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.537 03:15:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.796 03:15:31 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:43.796 03:15:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.796 03:15:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.796 03:15:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.796 ************************************ 00:06:43.796 START TEST locking_app_on_locked_coremask 00:06:43.796 ************************************ 00:06:43.796 03:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:43.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.796 03:15:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72761 00:06:43.796 03:15:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72761 /var/tmp/spdk.sock 00:06:43.796 03:15:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.796 03:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72761 ']' 00:06:43.796 03:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.796 03:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.796 03:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.796 03:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.796 03:15:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.796 [2024-11-21 03:15:31.226570] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:43.796 [2024-11-21 03:15:31.226694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72761 ] 00:06:44.054 [2024-11-21 03:15:31.359565] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:44.054 [2024-11-21 03:15:31.390195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.054 [2024-11-21 03:15:31.409696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.619 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.619 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:44.619 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:44.619 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72777 00:06:44.619 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72777 /var/tmp/spdk2.sock 00:06:44.619 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:44.619 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72777 /var/tmp/spdk2.sock 00:06:44.619 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:44.619 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.619 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:44.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.619 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.619 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72777 /var/tmp/spdk2.sock 00:06:44.619 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72777 ']' 00:06:44.619 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.619 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.619 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.619 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.619 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.619 [2024-11-21 03:15:32.127822] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:44.619 [2024-11-21 03:15:32.128145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72777 ] 00:06:44.876 [2024-11-21 03:15:32.261682] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:44.876 [2024-11-21 03:15:32.304299] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72761 has claimed it. 
00:06:44.876 [2024-11-21 03:15:32.304357] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:45.441 ERROR: process (pid: 72777) is no longer running 00:06:45.441 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72777) - No such process 00:06:45.441 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.441 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:45.441 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:45.441 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.441 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:45.441 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.441 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72761 00:06:45.441 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72761 00:06:45.441 03:15:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.700 03:15:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72761 00:06:45.700 03:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72761 ']' 00:06:45.700 03:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72761 00:06:45.700 03:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.700 03:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.700 03:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72761 00:06:45.700 killing process with pid 72761 00:06:45.700 03:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.700 03:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.700 03:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72761' 00:06:45.700 03:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72761 00:06:45.700 03:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72761 00:06:45.958 ************************************ 00:06:45.958 END TEST locking_app_on_locked_coremask 00:06:45.958 ************************************ 00:06:45.958 00:06:45.958 real 0m2.141s 00:06:45.958 user 0m2.374s 00:06:45.958 sys 0m0.545s 00:06:45.958 03:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.958 03:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.958 03:15:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:45.958 03:15:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.958 03:15:33 event.cpu_locks 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.958 03:15:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.958 ************************************ 00:06:45.958 START TEST locking_overlapped_coremask 00:06:45.958 ************************************ 00:06:45.958 03:15:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:45.958 03:15:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72819 00:06:45.958 03:15:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72819 /var/tmp/spdk.sock 00:06:45.958 03:15:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72819 ']' 00:06:45.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.958 03:15:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:45.958 03:15:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.958 03:15:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.958 03:15:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.958 03:15:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.958 03:15:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.958 [2024-11-21 03:15:33.427200] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:45.958 [2024-11-21 03:15:33.427319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72819 ] 00:06:46.215 [2024-11-21 03:15:33.560921] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:46.215 [2024-11-21 03:15:33.588308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.215 [2024-11-21 03:15:33.609696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.215 [2024-11-21 03:15:33.609926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.215 [2024-11-21 03:15:33.610013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.780 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.780 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:46.780 03:15:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:46.780 03:15:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72837 00:06:46.780 03:15:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72837 /var/tmp/spdk2.sock 00:06:46.780 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:46.780 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72837 /var/tmp/spdk2.sock 00:06:46.780 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:46.780 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.780 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:46.780 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.780 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72837 /var/tmp/spdk2.sock 00:06:46.780 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72837 ']' 00:06:46.780 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.780 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.780 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.780 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.780 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.780 [2024-11-21 03:15:34.330893] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:46.780 [2024-11-21 03:15:34.331175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72837 ] 00:06:47.038 [2024-11-21 03:15:34.466349] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:06:47.038 [2024-11-21 03:15:34.507822] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72819 has claimed it. 00:06:47.038 [2024-11-21 03:15:34.507879] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:47.604 ERROR: process (pid: 72837) is no longer running 00:06:47.604 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72837) - No such process 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72819 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 72819 ']' 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 72819 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72819 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72819' 00:06:47.604 killing process with pid 72819 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 72819 00:06:47.604 03:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 72819 00:06:47.863 00:06:47.863 real 0m1.912s 00:06:47.863 user 0m5.247s 00:06:47.863 sys 0m0.422s 00:06:47.863 ************************************ 00:06:47.863 03:15:35 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.863 03:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.863 END TEST locking_overlapped_coremask 00:06:47.863 ************************************ 00:06:47.863 03:15:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:47.863 03:15:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.863 03:15:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.863 03:15:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.863 ************************************ 00:06:47.863 START TEST locking_overlapped_coremask_via_rpc 00:06:47.863 ************************************ 00:06:47.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.863 03:15:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:47.863 03:15:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72879 00:06:47.863 03:15:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72879 /var/tmp/spdk.sock 00:06:47.863 03:15:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72879 ']' 00:06:47.863 03:15:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.863 03:15:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.863 03:15:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.863 03:15:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.863 03:15:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.863 03:15:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:47.863 [2024-11-21 03:15:35.400568] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:47.863 [2024-11-21 03:15:35.400687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72879 ] 00:06:48.121 [2024-11-21 03:15:35.534310] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:48.121 [2024-11-21 03:15:35.558419] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:48.121 [2024-11-21 03:15:35.558454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.122 [2024-11-21 03:15:35.583699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.122 [2024-11-21 03:15:35.584098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.122 [2024-11-21 03:15:35.584100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.688 03:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.688 03:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:48.688 03:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:48.688 03:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72897 00:06:48.688 03:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72897 /var/tmp/spdk2.sock 00:06:48.688 03:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72897 ']' 00:06:48.688 03:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.688 03:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.688 03:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.688 03:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.688 03:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.947 [2024-11-21 03:15:36.300413] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:48.947 [2024-11-21 03:15:36.300692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72897 ] 00:06:48.947 [2024-11-21 03:15:36.436756] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:48.947 [2024-11-21 03:15:36.482499] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:48.947 [2024-11-21 03:15:36.482543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.205 [2024-11-21 03:15:36.532975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.205 [2024-11-21 03:15:36.536117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.205 [2024-11-21 03:15:36.536193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:49.770 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.770 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:49.770 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.771 [2024-11-21 03:15:37.175051] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72879 has claimed it. 00:06:49.771 request: 00:06:49.771 { 00:06:49.771 "method": "framework_enable_cpumask_locks", 00:06:49.771 "req_id": 1 00:06:49.771 } 00:06:49.771 Got JSON-RPC error response 00:06:49.771 response: 00:06:49.771 { 00:06:49.771 "code": -32603, 00:06:49.771 "message": "Failed to claim CPU core: 2" 00:06:49.771 } 00:06:49.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72879 /var/tmp/spdk.sock 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72879 ']' 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.771 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.029 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.029 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:50.029 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72897 /var/tmp/spdk2.sock 00:06:50.029 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72897 ']' 00:06:50.029 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.029 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.029 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:50.029 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.029 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.288 ************************************ 00:06:50.288 END TEST locking_overlapped_coremask_via_rpc 00:06:50.288 ************************************ 00:06:50.288 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.288 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:50.288 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:50.288 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:50.288 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:50.288 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:50.288 00:06:50.288 real 0m2.278s 00:06:50.288 user 0m1.086s 00:06:50.288 sys 0m0.116s 00:06:50.288 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.288 03:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.288 03:15:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:50.288 03:15:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72879 ]] 00:06:50.288 03:15:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72879 00:06:50.288 03:15:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72879 ']' 00:06:50.288 03:15:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72879 00:06:50.288 03:15:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:50.288 03:15:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.288 03:15:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72879 00:06:50.288 killing process with pid 72879 00:06:50.288 03:15:37 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.288 03:15:37 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.288 03:15:37 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72879' 00:06:50.288 03:15:37 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72879 00:06:50.288 03:15:37 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72879 00:06:50.546 03:15:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72897 ]] 00:06:50.546 03:15:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72897 00:06:50.546 03:15:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72897 ']' 00:06:50.546 03:15:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72897 00:06:50.546 03:15:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:50.546 03:15:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.546 
03:15:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72897 00:06:50.546 killing process with pid 72897 00:06:50.546 03:15:37 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:50.546 03:15:37 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:50.546 03:15:37 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72897' 00:06:50.546 03:15:37 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72897 00:06:50.546 03:15:37 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72897 00:06:50.804 03:15:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:50.804 Process with pid 72879 is not found 00:06:50.804 Process with pid 72897 is not found 00:06:50.804 03:15:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:50.804 03:15:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72879 ]] 00:06:50.804 03:15:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72879 00:06:50.804 03:15:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72879 ']' 00:06:50.804 03:15:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72879 00:06:50.804 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72879) - No such process 00:06:50.804 03:15:38 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72879 is not found' 00:06:50.804 03:15:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72897 ]] 00:06:50.804 03:15:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72897 00:06:50.804 03:15:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72897 ']' 00:06:50.804 03:15:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72897 00:06:50.804 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72897) - No such process 00:06:50.804 03:15:38 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72897 is not found' 00:06:50.804 03:15:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:50.804 ************************************ 00:06:50.804 END TEST cpu_locks 00:06:50.804 ************************************ 00:06:50.804 00:06:50.804 real 0m16.093s 00:06:50.804 user 0m28.602s 00:06:50.804 sys 0m4.267s 00:06:50.804 03:15:38 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.804 03:15:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.804 00:06:50.804 real 0m41.622s 00:06:50.804 user 1m20.819s 00:06:50.804 sys 0m7.123s 00:06:50.804 03:15:38 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.804 03:15:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.804 ************************************ 00:06:50.804 END TEST event 00:06:50.804 ************************************ 00:06:51.062 03:15:38 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:51.062 03:15:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.062 03:15:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.062 03:15:38 -- common/autotest_common.sh@10 -- # set +x 00:06:51.062 ************************************ 00:06:51.062 START TEST thread 00:06:51.062 ************************************ 00:06:51.062 03:15:38 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:51.062 * Looking for test storage... 
00:06:51.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:51.062 03:15:38 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:51.062 03:15:38 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:51.062 03:15:38 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:51.062 03:15:38 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:51.062 03:15:38 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.062 03:15:38 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.062 03:15:38 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.062 03:15:38 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.062 03:15:38 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.062 03:15:38 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.062 03:15:38 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.062 03:15:38 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.062 03:15:38 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.062 03:15:38 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.062 03:15:38 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.062 03:15:38 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:51.062 03:15:38 thread -- scripts/common.sh@345 -- # : 1 00:06:51.062 03:15:38 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.062 03:15:38 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:51.062 03:15:38 thread -- scripts/common.sh@365 -- # decimal 1 00:06:51.062 03:15:38 thread -- scripts/common.sh@353 -- # local d=1 00:06:51.062 03:15:38 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.062 03:15:38 thread -- scripts/common.sh@355 -- # echo 1 00:06:51.062 03:15:38 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.062 03:15:38 thread -- scripts/common.sh@366 -- # decimal 2 00:06:51.062 03:15:38 thread -- scripts/common.sh@353 -- # local d=2 00:06:51.063 03:15:38 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.063 03:15:38 thread -- scripts/common.sh@355 -- # echo 2 00:06:51.063 03:15:38 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.063 03:15:38 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.063 03:15:38 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.063 03:15:38 thread -- scripts/common.sh@368 -- # return 0 00:06:51.063 03:15:38 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.063 03:15:38 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:51.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.063 --rc genhtml_branch_coverage=1 00:06:51.063 --rc genhtml_function_coverage=1 00:06:51.063 --rc genhtml_legend=1 00:06:51.063 --rc geninfo_all_blocks=1 00:06:51.063 --rc geninfo_unexecuted_blocks=1 00:06:51.063 00:06:51.063 ' 00:06:51.063 03:15:38 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:51.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.063 --rc genhtml_branch_coverage=1 00:06:51.063 --rc genhtml_function_coverage=1 00:06:51.063 --rc genhtml_legend=1 00:06:51.063 --rc geninfo_all_blocks=1 00:06:51.063 --rc geninfo_unexecuted_blocks=1 00:06:51.063 00:06:51.063 ' 00:06:51.063 03:15:38 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:51.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:51.063 --rc genhtml_branch_coverage=1 00:06:51.063 --rc genhtml_function_coverage=1 00:06:51.063 --rc genhtml_legend=1 00:06:51.063 --rc geninfo_all_blocks=1 00:06:51.063 --rc geninfo_unexecuted_blocks=1 00:06:51.063 00:06:51.063 ' 00:06:51.063 03:15:38 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:51.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.063 --rc genhtml_branch_coverage=1 00:06:51.063 --rc genhtml_function_coverage=1 00:06:51.063 --rc genhtml_legend=1 00:06:51.063 --rc geninfo_all_blocks=1 00:06:51.063 --rc geninfo_unexecuted_blocks=1 00:06:51.063 00:06:51.063 ' 00:06:51.063 03:15:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.063 03:15:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:51.063 03:15:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.063 03:15:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.063 ************************************ 00:06:51.063 START TEST thread_poller_perf 00:06:51.063 ************************************ 00:06:51.063 03:15:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.063 [2024-11-21 03:15:38.596088] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:51.063 [2024-11-21 03:15:38.596217] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73024 ] 00:06:51.321 [2024-11-21 03:15:38.726716] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:51.321 [2024-11-21 03:15:38.751031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.321 [2024-11-21 03:15:38.773990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.321 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:52.262 [2024-11-21T03:15:39.827Z] ====================================== 00:06:52.262 [2024-11-21T03:15:39.827Z] busy:2611110722 (cyc) 00:06:52.262 [2024-11-21T03:15:39.827Z] total_run_count: 412000 00:06:52.262 [2024-11-21T03:15:39.827Z] tsc_hz: 2600000000 (cyc) 00:06:52.262 [2024-11-21T03:15:39.827Z] ====================================== 00:06:52.262 [2024-11-21T03:15:39.827Z] poller_cost: 6337 (cyc), 2437 (nsec) 00:06:52.520 00:06:52.520 real 0m1.266s 00:06:52.520 user 0m1.078s 00:06:52.520 sys 0m0.082s 00:06:52.520 03:15:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.520 ************************************ 00:06:52.520 END TEST thread_poller_perf 00:06:52.520 03:15:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:52.520 ************************************ 00:06:52.520 03:15:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:52.520 03:15:39 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:52.520 03:15:39 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.520 03:15:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.520 ************************************ 00:06:52.520 START TEST thread_poller_perf 00:06:52.520 ************************************ 00:06:52.520 03:15:39 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:52.520 [2024-11-21 03:15:39.931655] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:52.520 [2024-11-21 03:15:39.933054] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73055 ] 00:06:52.521 [2024-11-21 03:15:40.068988] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:52.779 [2024-11-21 03:15:40.094720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.779 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:52.779 [2024-11-21 03:15:40.118048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.713 [2024-11-21T03:15:41.278Z] ====================================== 00:06:53.713 [2024-11-21T03:15:41.278Z] busy:2603327660 (cyc) 00:06:53.713 [2024-11-21T03:15:41.278Z] total_run_count: 4026000 00:06:53.713 [2024-11-21T03:15:41.278Z] tsc_hz: 2600000000 (cyc) 00:06:53.713 [2024-11-21T03:15:41.278Z] ====================================== 00:06:53.713 [2024-11-21T03:15:41.278Z] poller_cost: 646 (cyc), 248 (nsec) 00:06:53.713 ************************************ 00:06:53.713 END TEST thread_poller_perf 00:06:53.713 ************************************ 00:06:53.713 00:06:53.713 real 0m1.262s 00:06:53.713 user 0m1.079s 00:06:53.713 sys 0m0.076s 00:06:53.713 03:15:41 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.713 03:15:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.713 03:15:41 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:53.713 ************************************ 00:06:53.713 END TEST thread 00:06:53.713 ************************************ 00:06:53.713 00:06:53.713 real 0m2.801s 00:06:53.713 user 0m2.264s 00:06:53.713 sys 0m0.284s 00:06:53.713 03:15:41 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.713 03:15:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.713 03:15:41 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:53.713 03:15:41 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:53.713 03:15:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.713 03:15:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.713 03:15:41 -- common/autotest_common.sh@10 -- # set +x 00:06:53.713 ************************************ 00:06:53.713 START TEST app_cmdline 00:06:53.713 ************************************ 00:06:53.713 03:15:41 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:53.972 * Looking for test storage... 
00:06:53.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:53.972 03:15:41 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:53.972 03:15:41 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:53.973 03:15:41 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:53.973 03:15:41 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.973 03:15:41 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:53.973 03:15:41 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.973 03:15:41 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:53.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.973 --rc genhtml_branch_coverage=1 00:06:53.973 --rc genhtml_function_coverage=1 00:06:53.973 --rc genhtml_legend=1 00:06:53.973 --rc geninfo_all_blocks=1 00:06:53.973 --rc geninfo_unexecuted_blocks=1 00:06:53.973 00:06:53.973 ' 00:06:53.973 03:15:41 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:53.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.973 --rc genhtml_branch_coverage=1 00:06:53.973 --rc genhtml_function_coverage=1 00:06:53.973 --rc genhtml_legend=1 00:06:53.973 --rc geninfo_all_blocks=1 00:06:53.973 --rc geninfo_unexecuted_blocks=1 00:06:53.973 
00:06:53.973 ' 00:06:53.973 03:15:41 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:53.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.973 --rc genhtml_branch_coverage=1 00:06:53.973 --rc genhtml_function_coverage=1 00:06:53.973 --rc genhtml_legend=1 00:06:53.973 --rc geninfo_all_blocks=1 00:06:53.973 --rc geninfo_unexecuted_blocks=1 00:06:53.973 00:06:53.973 ' 00:06:53.973 03:15:41 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:53.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.973 --rc genhtml_branch_coverage=1 00:06:53.973 --rc genhtml_function_coverage=1 00:06:53.973 --rc genhtml_legend=1 00:06:53.973 --rc geninfo_all_blocks=1 00:06:53.973 --rc geninfo_unexecuted_blocks=1 00:06:53.973 00:06:53.973 ' 00:06:53.973 03:15:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:53.973 03:15:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73144 00:06:53.973 03:15:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73144 00:06:53.973 03:15:41 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 73144 ']' 00:06:53.973 03:15:41 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.973 03:15:41 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.973 03:15:41 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.973 03:15:41 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.973 03:15:41 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:53.973 03:15:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:53.973 [2024-11-21 03:15:41.489545] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:53.973 [2024-11-21 03:15:41.489653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73144 ] 00:06:54.231 [2024-11-21 03:15:41.621731] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
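Context for the exchange that follows: spdk_tgt was started above with --rpcs-allowed spdk_get_version,rpc_get_methods, so the test expects exactly those two RPCs to succeed and any other method to be rejected with JSON-RPC error -32601. A minimal standalone sketch of that check, assuming the target is listening on the default /var/tmp/spdk.sock (the rpc variable is illustrative, not from the test script itself):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # The two allowlisted methods work...
    "$rpc" rpc_get_methods | jq -r '.[]' | sort   # expect: rpc_get_methods, spdk_get_version
    "$rpc" spdk_get_version                       # expect: version JSON for SPDK v25.01-pre
    # ...and anything outside the allowlist must fail with -32601 (Method not found)
    if "$rpc" env_dpdk_get_mem_stats 2>/dev/null; then
        echo "allowlist violated" >&2
        exit 1
    fi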
00:06:54.231 [2024-11-21 03:15:41.654262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.231 [2024-11-21 03:15:41.672540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.797 03:15:42 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.797 03:15:42 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:54.797 03:15:42 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:55.055 { 00:06:55.055 "version": "SPDK v25.01-pre git sha1 557f022f6", 00:06:55.055 "fields": { 00:06:55.055 "major": 25, 00:06:55.055 "minor": 1, 00:06:55.055 "patch": 0, 00:06:55.055 "suffix": "-pre", 00:06:55.055 "commit": "557f022f6" 00:06:55.055 } 00:06:55.055 } 00:06:55.055 03:15:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:55.055 03:15:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:55.055 03:15:42 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:55.055 03:15:42 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:55.055 03:15:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:55.055 03:15:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:55.055 03:15:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:55.055 03:15:42 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.055 03:15:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.055 03:15:42 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.055 03:15:42 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:55.055 03:15:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:55.055 03:15:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:55.055 03:15:42 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:55.055 03:15:42 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:55.055 03:15:42 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.056 03:15:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.056 03:15:42 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.056 03:15:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.056 03:15:42 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.056 03:15:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.056 03:15:42 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.056 03:15:42 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:55.056 03:15:42 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:55.314 request: 00:06:55.314 { 00:06:55.314 "method": "env_dpdk_get_mem_stats", 00:06:55.314 "req_id": 1 00:06:55.314 } 00:06:55.314 Got JSON-RPC error response 00:06:55.314 response: 00:06:55.314 { 00:06:55.314 "code": -32601, 00:06:55.314 
"message": "Method not found" 00:06:55.314 } 00:06:55.314 03:15:42 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:55.314 03:15:42 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.314 03:15:42 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:55.314 03:15:42 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.314 03:15:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73144 00:06:55.314 03:15:42 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 73144 ']' 00:06:55.314 03:15:42 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 73144 00:06:55.314 03:15:42 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:55.314 03:15:42 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.314 03:15:42 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73144 00:06:55.314 03:15:42 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.314 03:15:42 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.314 killing process with pid 73144 00:06:55.314 03:15:42 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73144' 00:06:55.314 03:15:42 app_cmdline -- common/autotest_common.sh@973 -- # kill 73144 00:06:55.314 03:15:42 app_cmdline -- common/autotest_common.sh@978 -- # wait 73144 00:06:55.572 00:06:55.572 real 0m1.747s 00:06:55.572 user 0m2.085s 00:06:55.572 sys 0m0.390s 00:06:55.572 03:15:43 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.572 03:15:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.572 ************************************ 00:06:55.572 END TEST app_cmdline 00:06:55.572 ************************************ 00:06:55.572 03:15:43 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:55.572 03:15:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.572 03:15:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.572 03:15:43 -- common/autotest_common.sh@10 -- # set +x 00:06:55.572 ************************************ 00:06:55.572 START TEST version 00:06:55.572 ************************************ 00:06:55.572 03:15:43 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:55.830 * Looking for test storage... 
00:06:55.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:55.830 03:15:43 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:55.830 03:15:43 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:55.830 03:15:43 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.830 03:15:43 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.830 03:15:43 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.830 03:15:43 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.830 03:15:43 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.830 03:15:43 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.830 03:15:43 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.830 03:15:43 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.830 03:15:43 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.830 03:15:43 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.830 03:15:43 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.830 03:15:43 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.830 03:15:43 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.830 03:15:43 version -- scripts/common.sh@344 -- # case "$op" in 00:06:55.830 03:15:43 version -- scripts/common.sh@345 -- # : 1 00:06:55.830 03:15:43 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.830 03:15:43 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.830 03:15:43 version -- scripts/common.sh@365 -- # decimal 1 00:06:55.830 03:15:43 version -- scripts/common.sh@353 -- # local d=1 00:06:55.831 03:15:43 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.831 03:15:43 version -- scripts/common.sh@355 -- # echo 1 00:06:55.831 03:15:43 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.831 03:15:43 version -- scripts/common.sh@366 -- # decimal 2 00:06:55.831 03:15:43 version -- scripts/common.sh@353 -- # local d=2 00:06:55.831 03:15:43 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.831 03:15:43 version -- scripts/common.sh@355 -- # echo 2 00:06:55.831 03:15:43 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.831 03:15:43 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.831 03:15:43 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.831 03:15:43 version -- scripts/common.sh@368 -- # return 0 00:06:55.831 03:15:43 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.831 03:15:43 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.831 --rc genhtml_branch_coverage=1 00:06:55.831 --rc genhtml_function_coverage=1 00:06:55.831 --rc genhtml_legend=1 00:06:55.831 --rc geninfo_all_blocks=1 00:06:55.831 --rc geninfo_unexecuted_blocks=1 00:06:55.831 00:06:55.831 ' 00:06:55.831 03:15:43 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.831 --rc genhtml_branch_coverage=1 00:06:55.831 --rc genhtml_function_coverage=1 00:06:55.831 --rc genhtml_legend=1 00:06:55.831 --rc geninfo_all_blocks=1 00:06:55.831 --rc geninfo_unexecuted_blocks=1 00:06:55.831 00:06:55.831 ' 00:06:55.831 03:15:43 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:55.831 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:55.831 --rc genhtml_branch_coverage=1 00:06:55.831 --rc genhtml_function_coverage=1 00:06:55.831 --rc genhtml_legend=1 00:06:55.831 --rc geninfo_all_blocks=1 00:06:55.831 --rc geninfo_unexecuted_blocks=1 00:06:55.831 00:06:55.831 ' 00:06:55.831 03:15:43 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.831 --rc genhtml_branch_coverage=1 00:06:55.831 --rc genhtml_function_coverage=1 00:06:55.831 --rc genhtml_legend=1 00:06:55.831 --rc geninfo_all_blocks=1 00:06:55.831 --rc geninfo_unexecuted_blocks=1 00:06:55.831 00:06:55.831 ' 00:06:55.831 03:15:43 version -- app/version.sh@17 -- # get_header_version major 00:06:55.831 03:15:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.831 03:15:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.831 03:15:43 version -- app/version.sh@14 -- # cut -f2 00:06:55.831 03:15:43 version -- app/version.sh@17 -- # major=25 00:06:55.831 03:15:43 version -- app/version.sh@18 -- # get_header_version minor 00:06:55.831 03:15:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.831 03:15:43 version -- app/version.sh@14 -- # cut -f2 00:06:55.831 03:15:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.831 03:15:43 version -- app/version.sh@18 -- # minor=1 00:06:55.831 03:15:43 version -- app/version.sh@19 -- # get_header_version patch 00:06:55.831 03:15:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.831 03:15:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.831 03:15:43 version -- app/version.sh@14 -- # cut -f2 00:06:55.831 03:15:43 version -- app/version.sh@19 -- # patch=0 00:06:55.831 03:15:43 version -- app/version.sh@20 -- # get_header_version suffix 00:06:55.831 03:15:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.831 03:15:43 version -- app/version.sh@14 -- # cut -f2 00:06:55.831 03:15:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.831 03:15:43 version -- app/version.sh@20 -- # suffix=-pre 00:06:55.831 03:15:43 version -- app/version.sh@22 -- # version=25.1 00:06:55.831 03:15:43 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:55.831 03:15:43 version -- app/version.sh@28 -- # version=25.1rc0 00:06:55.831 03:15:43 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:55.831 03:15:43 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:55.831 03:15:43 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:55.831 03:15:43 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:55.831 00:06:55.831 real 0m0.199s 00:06:55.831 user 0m0.124s 00:06:55.831 sys 0m0.100s 00:06:55.831 ************************************ 00:06:55.831 END TEST version 00:06:55.831 ************************************ 00:06:55.831 03:15:43 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.831 03:15:43 version -- common/autotest_common.sh@10 -- # set +x 00:06:55.831 03:15:43 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:55.831 03:15:43 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:55.831 03:15:43 -- spdk/autotest.sh@194 -- # uname -s 00:06:55.831 03:15:43 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:55.831 03:15:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:55.831 03:15:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:55.831 03:15:43 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:55.831 03:15:43 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:55.831 03:15:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:55.831 03:15:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.831 03:15:43 -- common/autotest_common.sh@10 -- # set +x 00:06:55.831 ************************************ 00:06:55.831 START TEST blockdev_nvme 00:06:55.831 ************************************ 00:06:55.831 03:15:43 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:56.089 * Looking for test storage... 00:06:56.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:56.089 03:15:43 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.089 03:15:43 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.089 03:15:43 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.089 03:15:43 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.089 03:15:43 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:56.089 03:15:43 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.089 03:15:43 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.089 --rc genhtml_branch_coverage=1 00:06:56.089 --rc genhtml_function_coverage=1 00:06:56.089 --rc genhtml_legend=1 00:06:56.089 --rc geninfo_all_blocks=1 00:06:56.089 --rc geninfo_unexecuted_blocks=1 00:06:56.089 00:06:56.089 ' 00:06:56.089 03:15:43 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.089 --rc genhtml_branch_coverage=1 00:06:56.089 --rc genhtml_function_coverage=1 00:06:56.089 --rc genhtml_legend=1 00:06:56.089 --rc geninfo_all_blocks=1 00:06:56.089 --rc geninfo_unexecuted_blocks=1 00:06:56.089 00:06:56.089 ' 00:06:56.089 03:15:43 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.089 --rc genhtml_branch_coverage=1 00:06:56.089 --rc genhtml_function_coverage=1 00:06:56.089 --rc genhtml_legend=1 00:06:56.089 --rc geninfo_all_blocks=1 00:06:56.089 --rc geninfo_unexecuted_blocks=1 00:06:56.089 00:06:56.089 ' 00:06:56.089 03:15:43 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.089 --rc genhtml_branch_coverage=1 00:06:56.089 --rc genhtml_function_coverage=1 00:06:56.089 --rc genhtml_legend=1 00:06:56.089 --rc geninfo_all_blocks=1 00:06:56.089 --rc geninfo_unexecuted_blocks=1 00:06:56.089 00:06:56.089 ' 00:06:56.089 03:15:43 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:56.089 03:15:43 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:56.089 03:15:43 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:56.089 03:15:43 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:56.089 03:15:43 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:56.089 03:15:43 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:56.089 03:15:43 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:56.089 03:15:43 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:56.089 03:15:43 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:56.089 03:15:43 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:56.089 03:15:43 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:56.089 03:15:43 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:56.089 03:15:43 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:56.089 03:15:43 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:56.089 03:15:43 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:56.089 03:15:43 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:56.090 03:15:43 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:56.090 03:15:43 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:56.090 03:15:43 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:56.090 03:15:43 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:56.090 03:15:43 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:56.090 03:15:43 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:56.090 03:15:43 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:56.090 03:15:43 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:56.090 03:15:43 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73305 00:06:56.090 03:15:43 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:56.090 03:15:43 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 73305 00:06:56.090 03:15:43 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 73305 ']' 00:06:56.090 03:15:43 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.090 03:15:43 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:56.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.090 03:15:43 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.090 03:15:43 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.090 03:15:43 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.090 03:15:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.090 [2024-11-21 03:15:43.577829] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:56.090 [2024-11-21 03:15:43.577979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73305 ] 00:06:56.347 [2024-11-21 03:15:43.710430] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
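The waitforlisten call traced above polls until spdk_tgt (pid 73305) is alive and answering RPCs on /var/tmp/spdk.sock before the test proceeds. A simplified sketch of that polling loop, not the exact helper from autotest_common.sh:

    spdk_tgt_pid=73305
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        # Give up early if the target process died during startup
        kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt exited" >&2; exit 1; }
        # Probe the RPC socket; a successful call means the target is up
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.5
    done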
00:06:56.347 [2024-11-21 03:15:43.742214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.347 [2024-11-21 03:15:43.761795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.912 03:15:44 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.912 03:15:44 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:56.912 03:15:44 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:56.912 03:15:44 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:56.912 03:15:44 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:56.912 03:15:44 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:56.912 03:15:44 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:56.912 03:15:44 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:56.912 03:15:44 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.912 03:15:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.479 03:15:44 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.479 03:15:44 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:57.479 03:15:44 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.479 03:15:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.479 03:15:44 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.479 03:15:44 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:57.479 03:15:44 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:57.479 03:15:44 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.479 03:15:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.479 03:15:44 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.479 03:15:44 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:57.479 03:15:44 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.479 03:15:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.479 03:15:44 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.479 03:15:44 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:57.479 03:15:44 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.479 03:15:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.479 03:15:44 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.479 03:15:44 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:57.479 03:15:44 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:57.479 03:15:44 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.479 03:15:44 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 
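The single-line JSON passed to load_subsystem_config above was produced by gen_nvme.sh; pretty-printed for readability (content identical to the trace), it is just four bdev_nvme_attach_controller calls, one per PCIe controller under test:

    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
        { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
        { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
        { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
      ]
    }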
00:06:57.479 03:15:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.479 03:15:44 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.479 03:15:44 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:57.479 03:15:44 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:57.480 03:15:44 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "39e76bc1-97a8-48f4-825a-072faab1b0f8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "39e76bc1-97a8-48f4-825a-072faab1b0f8",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "24c515c0-620c-46da-a36e-527b759573fb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "24c515c0-620c-46da-a36e-527b759573fb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": 
"Nvme2n1",' ' "aliases": [' ' "0d930a85-d1e6-49cd-980b-3d8684588770"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0d930a85-d1e6-49cd-980b-3d8684588770",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "a91ed2e4-99cb-4f32-af1b-46f7ceeda622"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a91ed2e4-99cb-4f32-af1b-46f7ceeda622",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "34a9c9af-4f64-4c4c-bd5f-1cce4d101a8f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "34a9c9af-4f64-4c4c-bd5f-1cce4d101a8f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": 
true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "9791a567-d72c-422d-895b-cd9088147aae"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9791a567-d72c-422d-895b-cd9088147aae",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:57.480 03:15:44 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:57.480 03:15:44 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:57.480 03:15:44 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:57.480 03:15:44 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 73305 00:06:57.480 03:15:44 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 73305 ']' 00:06:57.480 03:15:44 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 73305 00:06:57.480 03:15:44 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:57.480 03:15:44 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.480 03:15:44 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73305 00:06:57.480 03:15:44 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.480 03:15:44 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:57.480 killing process with pid 73305 00:06:57.480 03:15:44 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73305' 00:06:57.480 03:15:44 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 73305 00:06:57.480 03:15:44 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 73305 00:06:57.769 03:15:45 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:57.769 03:15:45 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:57.769 03:15:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:57.769 03:15:45 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.769 03:15:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.769 ************************************ 00:06:57.769 START TEST bdev_hello_world 00:06:57.769 ************************************ 00:06:57.769 03:15:45 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:57.769 [2024-11-21 03:15:45.258568] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:57.769 [2024-11-21 03:15:45.258696] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73367 ] 00:06:58.027 [2024-11-21 03:15:45.390076] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:58.027 [2024-11-21 03:15:45.416651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.027 [2024-11-21 03:15:45.436510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.285 [2024-11-21 03:15:45.809081] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:58.285 [2024-11-21 03:15:45.809130] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:58.285 [2024-11-21 03:15:45.809148] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:58.285 [2024-11-21 03:15:45.811208] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:58.285 [2024-11-21 03:15:45.812118] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:58.285 [2024-11-21 03:15:45.812145] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:58.285 [2024-11-21 03:15:45.812779] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:06:58.285 00:06:58.285 [2024-11-21 03:15:45.812806] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:58.543 00:06:58.543 real 0m0.762s 00:06:58.543 user 0m0.502s 00:06:58.543 sys 0m0.157s 00:06:58.543 03:15:45 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.543 ************************************ 00:06:58.543 03:15:45 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:58.543 END TEST bdev_hello_world 00:06:58.543 ************************************ 00:06:58.543 03:15:46 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:58.543 03:15:46 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:58.543 03:15:46 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.543 03:15:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.543 ************************************ 00:06:58.543 START TEST bdev_bounds 00:06:58.543 ************************************ 00:06:58.543 03:15:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:58.543 03:15:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73398 00:06:58.543 03:15:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:58.543 03:15:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73398' 00:06:58.543 Process bdevio pid: 73398 00:06:58.543 03:15:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73398 00:06:58.543 03:15:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73398 ']' 00:06:58.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.543 03:15:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.543 03:15:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.543 03:15:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.543 03:15:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:58.543 03:15:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.543 03:15:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:58.543 [2024-11-21 03:15:46.083642] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:58.543 [2024-11-21 03:15:46.083766] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73398 ] 00:06:58.800 [2024-11-21 03:15:46.216530] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
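bdevio was started above with -w, so after it comes up it idles until told to run; the test pass itself is triggered over RPC by tests.py perform_tests, which is the next step in the trace below. The driving sequence reduces to roughly this sketch (paths relative to the repo root; the backgrounding and pid handling are illustrative, and the captured run additionally pinned cores with -c 0x7 via the EAL arguments above):

    # Start the I/O test server against the generated bdev config
    sudo test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    # Once the RPC socket is up, run every registered bdevio test over all bdevs
    sudo test/bdev/bdevio/tests.py perform_tests
    wait "$bdevio_pid"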
00:06:58.800 [2024-11-21 03:15:46.244087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.800 [2024-11-21 03:15:46.265992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.800 [2024-11-21 03:15:46.266092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.800 [2024-11-21 03:15:46.266239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.367 03:15:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.367 03:15:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:59.367 03:15:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:59.626 I/O targets: 00:06:59.626 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:59.626 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:59.626 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:59.626 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:59.626 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:59.626 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:59.626 00:06:59.626 00:06:59.626 CUnit - A unit testing framework for C - Version 2.1-3 00:06:59.626 http://cunit.sourceforge.net/ 00:06:59.626 00:06:59.626 00:06:59.626 Suite: bdevio tests on: Nvme3n1 00:06:59.626 Test: blockdev write read block ...passed 00:06:59.626 Test: blockdev write zeroes read block ...passed 00:06:59.626 Test: blockdev write zeroes read no split ...passed 00:06:59.626 Test: blockdev write zeroes read split ...passed 00:06:59.626 Test: blockdev write zeroes read split partial ...passed 00:06:59.626 Test: blockdev reset ...[2024-11-21 03:15:47.018978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:59.626 passed 00:06:59.626 Test: blockdev write read 8 blocks ...[2024-11-21 03:15:47.022888] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:59.626 passed 00:06:59.626 Test: blockdev write read size > 128k ...passed 00:06:59.626 Test: blockdev write read invalid size ...passed 00:06:59.626 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:59.626 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:59.626 Test: blockdev write read max offset ...passed 00:06:59.626 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:59.626 Test: blockdev writev readv 8 blocks ...passed 00:06:59.626 Test: blockdev writev readv 30 x 1block ...passed 00:06:59.626 Test: blockdev writev readv block ...passed 00:06:59.626 Test: blockdev writev readv size > 128k ...passed 00:06:59.626 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:59.626 Test: blockdev comparev and writev ...[2024-11-21 03:15:47.040143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cb606000 len:0x1000 00:06:59.626 [2024-11-21 03:15:47.040208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:59.626 passed 00:06:59.626 Test: blockdev nvme passthru rw ...passed 00:06:59.626 Test: blockdev nvme passthru vendor specific ...passed 00:06:59.626 Test: blockdev nvme admin passthru ...[2024-11-21 03:15:47.042959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:59.626 [2024-11-21 03:15:47.042995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:59.626 passed 00:06:59.626 Test: blockdev copy ...passed 00:06:59.626 Suite: bdevio tests on: Nvme2n3 00:06:59.626 Test: blockdev write read block ...passed 00:06:59.626 Test: blockdev write zeroes read block ...passed 00:06:59.626 Test: blockdev write zeroes read no split ...passed 00:06:59.626 Test: blockdev write zeroes read split ...passed 00:06:59.626 Test: blockdev write zeroes read split partial ...passed 00:06:59.626 Test: blockdev reset ...[2024-11-21 03:15:47.069049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:59.626 passed 00:06:59.626 Test: blockdev write read 8 blocks ...[2024-11-21 03:15:47.073519] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:59.626 passed 00:06:59.626 Test: blockdev write read size > 128k ...passed 00:06:59.626 Test: blockdev write read invalid size ...passed 00:06:59.626 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:59.626 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:59.626 Test: blockdev write read max offset ...passed 00:06:59.626 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:59.627 Test: blockdev writev readv 8 blocks ...passed 00:06:59.627 Test: blockdev writev readv 30 x 1block ...passed 00:06:59.627 Test: blockdev writev readv block ...passed 00:06:59.627 Test: blockdev writev readv size > 128k ...passed 00:06:59.627 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:59.627 Test: blockdev comparev and writev ...[2024-11-21 03:15:47.091180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x300405000 len:0x1000 00:06:59.627 [2024-11-21 03:15:47.091230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:59.627 passed 00:06:59.627 Test: blockdev nvme passthru rw ...passed 00:06:59.627 Test: blockdev nvme passthru vendor specific ...[2024-11-21 03:15:47.093747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:59.627 [2024-11-21 03:15:47.093890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:59.627 passed 00:06:59.627 Test: blockdev nvme admin passthru ...passed 00:06:59.627 Test: blockdev copy ...passed 00:06:59.627 Suite: bdevio tests on: Nvme2n2 00:06:59.627 Test: blockdev write read block ...passed 00:06:59.627 Test: blockdev write zeroes read block ...passed 00:06:59.627 Test: blockdev write zeroes read no split ...passed 00:06:59.627 Test: blockdev write zeroes read split ...passed 00:06:59.627 Test: blockdev write zeroes read split partial ...passed 00:06:59.627 Test: blockdev reset ...[2024-11-21 03:15:47.120813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:59.627 [2024-11-21 03:15:47.127308] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:59.627 passed 00:06:59.627 Test: blockdev write read 8 blocks ...
00:06:59.627 passed 00:06:59.627 Test: blockdev write read size > 128k ...passed 00:06:59.627 Test: blockdev write read invalid size ...passed 00:06:59.627 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:59.627 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:59.627 Test: blockdev write read max offset ...passed 00:06:59.627 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:59.627 Test: blockdev writev readv 8 blocks ...passed 00:06:59.627 Test: blockdev writev readv 30 x 1block ...passed 00:06:59.627 Test: blockdev writev readv block ...passed 00:06:59.627 Test: blockdev writev readv size > 128k ...passed 00:06:59.627 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:59.627 Test: blockdev comparev and writev ...[2024-11-21 03:15:47.147190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e1a36000 len:0x1000 00:06:59.627 [2024-11-21 03:15:47.147229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:59.627 passed 00:06:59.627 Test: blockdev nvme passthru rw ...passed 00:06:59.627 Test: blockdev nvme passthru vendor specific ...passed 00:06:59.627 Test: blockdev nvme admin passthru ...[2024-11-21 03:15:47.149941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:59.627 [2024-11-21 03:15:47.149973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:59.627 passed 00:06:59.627 Test: blockdev copy ...passed 00:06:59.627 Suite: bdevio tests on: Nvme2n1 00:06:59.627 Test: blockdev write read block ...passed 00:06:59.627 Test: blockdev write zeroes read block ...passed 00:06:59.627 Test: blockdev write zeroes read no split ...passed 00:06:59.627 Test: blockdev write zeroes read split ...passed 00:06:59.627 Test: blockdev write zeroes read split partial ...passed 00:06:59.627 Test: blockdev reset ...[2024-11-21 03:15:47.172800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:59.627 passed 00:06:59.627 Test: blockdev write read 8 blocks ...[2024-11-21 03:15:47.175011] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:59.627 passed 00:06:59.627 Test: blockdev write read size > 128k ...passed 00:06:59.627 Test: blockdev write read invalid size ...passed 00:06:59.627 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:59.627 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:59.627 Test: blockdev write read max offset ...passed 00:06:59.627 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:59.627 Test: blockdev writev readv 8 blocks ...passed 00:06:59.627 Test: blockdev writev readv 30 x 1block ...passed 00:06:59.886 Test: blockdev writev readv block ...passed 00:06:59.886 Test: blockdev writev readv size > 128k ...passed 00:06:59.886 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:59.886 Test: blockdev comparev and writev ...[2024-11-21 03:15:47.192938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e1a30000 len:0x1000 00:06:59.886 [2024-11-21 03:15:47.193013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:59.886 passed 00:06:59.886 Test: blockdev nvme passthru rw ...passed 00:06:59.886 Test: blockdev nvme passthru vendor specific ...passed 00:06:59.886 Test: blockdev nvme admin passthru ...[2024-11-21 03:15:47.195180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:59.886 [2024-11-21 03:15:47.195212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:59.886 passed 00:06:59.886 Test: blockdev copy ...passed 00:06:59.886 Suite: bdevio tests on: Nvme1n1 00:06:59.886 Test: blockdev write read block ...passed 00:06:59.886 Test: blockdev write zeroes read block ...passed 00:06:59.886 Test: blockdev write zeroes read no split ...passed 00:06:59.886 Test: blockdev write zeroes read split ...passed 00:06:59.886 Test: blockdev write zeroes read split partial ...passed 00:06:59.886 Test: blockdev reset ...[2024-11-21 03:15:47.223091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:59.886 [2024-11-21 03:15:47.226825] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller spassed 00:06:59.886 Test: blockdev write read 8 blocks ...uccessful. 
00:06:59.886 passed 00:06:59.886 Test: blockdev write read size > 128k ...passed 00:06:59.886 Test: blockdev write read invalid size ...passed 00:06:59.886 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:59.886 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:59.886 Test: blockdev write read max offset ...passed 00:06:59.886 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:59.886 Test: blockdev writev readv 8 blocks ...passed 00:06:59.886 Test: blockdev writev readv 30 x 1block ...passed 00:06:59.886 Test: blockdev writev readv block ...passed 00:06:59.886 Test: blockdev writev readv size > 128k ...passed 00:06:59.886 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:59.886 Test: blockdev comparev and writev ...[2024-11-21 03:15:47.244495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e1a2c000 len:0x1000 00:06:59.886 [2024-11-21 03:15:47.244544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:59.886 passed 00:06:59.886 Test: blockdev nvme passthru rw ...passed 00:06:59.886 Test: blockdev nvme passthru vendor specific ...[2024-11-21 03:15:47.247033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 passed 00:06:59.886 Test: blockdev nvme admin passthru ... 00:06:59.886 [2024-11-21 03:15:47.247166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:59.886 passed 00:06:59.886 Test: blockdev copy ...passed 00:06:59.886 Suite: bdevio tests on: Nvme0n1 00:06:59.886 Test: blockdev write read block ...passed 00:06:59.886 Test: blockdev write zeroes read block ...passed 00:06:59.886 Test: blockdev write zeroes read no split ...passed 00:06:59.886 Test: blockdev write zeroes read split ...passed 00:06:59.886 Test: blockdev write zeroes read split partial ...passed 00:06:59.886 Test: blockdev reset ...[2024-11-21 03:15:47.275587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:59.886 [2024-11-21 03:15:47.277280] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. passed 00:06:59.886 Test: blockdev write read 8 blocks ... 00:06:59.886 passed 00:06:59.886 Test: blockdev write read size > 128k ...passed 00:06:59.886 Test: blockdev write read invalid size ...passed 00:06:59.886 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:59.886 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:59.886 Test: blockdev write read max offset ...passed 00:06:59.886 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:59.886 Test: blockdev writev readv 8 blocks ...passed 00:06:59.886 Test: blockdev writev readv 30 x 1block ...passed 00:06:59.886 Test: blockdev writev readv block ...passed 00:06:59.886 Test: blockdev writev readv size > 128k ...passed 00:06:59.886 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:59.886 Test: blockdev comparev and writev ...passed 00:06:59.886 Test: blockdev nvme passthru rw ...[2024-11-21 03:15:47.286876] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:06:59.886 passed 00:06:59.886 Test: blockdev nvme passthru vendor specific ...[2024-11-21 03:15:47.287618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:59.886 [2024-11-21 03:15:47.287651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:59.886 passed 00:06:59.886 Test: blockdev nvme admin passthru ...passed 00:06:59.886 Test: blockdev copy ...passed 00:06:59.886 00:06:59.886 Run Summary: Type Total Ran Passed Failed Inactive 00:06:59.886 suites 6 6 n/a 0 0 00:06:59.886 tests 138 138 138 0 0 00:06:59.886 asserts 893 893 893 0 n/a 00:06:59.886 00:06:59.886 Elapsed time = 0.648 seconds 00:06:59.886 0 00:06:59.886 03:15:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73398 00:06:59.886 03:15:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73398 ']' 00:06:59.886 03:15:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73398 00:06:59.886 03:15:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:59.886 03:15:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.886 03:15:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73398 00:06:59.886 03:15:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.886 03:15:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.886 03:15:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73398' 00:06:59.886 killing process with pid 73398 00:06:59.886 03:15:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73398 00:06:59.886 03:15:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73398 00:07:00.144 03:15:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:00.144 00:07:00.144 real 0m1.473s 00:07:00.144 user 0m3.655s 00:07:00.144 sys 0m0.281s 00:07:00.144 03:15:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.144 ************************************ 00:07:00.144 END TEST bdev_bounds 00:07:00.144 ************************************ 00:07:00.145 03:15:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:00.145 03:15:47 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:00.145 03:15:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:00.145 03:15:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.145 03:15:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.145 ************************************ 00:07:00.145 START TEST bdev_nbd 00:07:00.145 ************************************ 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73452 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73452 /var/tmp/spdk-nbd.sock 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73452 ']' 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:00.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.145 03:15:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:00.145 [2024-11-21 03:15:47.628051] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:00.145 [2024-11-21 03:15:47.628357] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.404 [2024-11-21 03:15:47.769346] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
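For reference, the nbd_start_disk / nbd_stop_disk round trips traced below can be reproduced by hand against any running SPDK app; a minimal sketch, assuming the app is already listening on /var/tmp/spdk-nbd.sock, a bdev named Nvme0n1 is loaded, and scripts/rpc.py is the stock RPC client from an SPDK checkout:
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
grep -q -w nbd0 /proc/partitions                            # readiness check, as in waitfornbd below
dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # direct-I/O probe, as in the traces below
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0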
00:07:00.404 [2024-11-21 03:15:47.796395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.404 [2024-11-21 03:15:47.816354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.970 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.970 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:00.970 03:15:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:00.970 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.970 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.970 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:00.970 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:00.970 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.970 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.970 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:00.970 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:00.970 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:00.970 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:00.970 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:00.970 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.229 1+0 records in 00:07:01.229 1+0 records out 00:07:01.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00099413 s, 4.1 MB/s 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@890 -- # size=4096 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:01.229 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.525 1+0 records in 00:07:01.525 1+0 records out 00:07:01.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571625 s, 7.2 MB/s 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:01.525 03:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( 
i = 1 )) 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.783 1+0 records in 00:07:01.783 1+0 records out 00:07:01.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503358 s, 8.1 MB/s 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:01.783 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:02.040 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:02.040 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:02.040 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:02.040 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:02.040 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.040 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.041 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.041 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:02.041 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.041 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.041 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.041 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.041 1+0 records in 00:07:02.041 1+0 records out 00:07:02.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041613 s, 9.8 MB/s 00:07:02.041 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.041 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.041 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.041 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 
'!=' 0 ']' 00:07:02.041 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.041 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.041 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:02.041 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:02.299 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:02.299 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.300 1+0 records in 00:07:02.300 1+0 records out 00:07:02.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377418 s, 10.9 MB/s 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@877 -- # break 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.300 1+0 records in 00:07:02.300 1+0 records out 00:07:02.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312816 s, 13.1 MB/s 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:02.300 03:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.558 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:02.558 { 00:07:02.558 "nbd_device": "/dev/nbd0", 00:07:02.558 "bdev_name": "Nvme0n1" 00:07:02.558 }, 00:07:02.558 { 00:07:02.558 "nbd_device": "/dev/nbd1", 00:07:02.558 "bdev_name": "Nvme1n1" 00:07:02.558 }, 00:07:02.558 { 00:07:02.558 "nbd_device": "/dev/nbd2", 00:07:02.558 "bdev_name": "Nvme2n1" 00:07:02.558 }, 00:07:02.558 { 00:07:02.558 "nbd_device": "/dev/nbd3", 00:07:02.558 "bdev_name": "Nvme2n2" 00:07:02.558 }, 00:07:02.558 { 00:07:02.558 "nbd_device": "/dev/nbd4", 00:07:02.558 "bdev_name": "Nvme2n3" 00:07:02.558 }, 00:07:02.558 { 00:07:02.558 "nbd_device": "/dev/nbd5", 00:07:02.558 "bdev_name": "Nvme3n1" 00:07:02.558 } 00:07:02.558 ]' 00:07:02.558 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:02.558 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:02.558 { 00:07:02.558 "nbd_device": "/dev/nbd0", 00:07:02.558 "bdev_name": "Nvme0n1" 00:07:02.558 }, 00:07:02.558 { 00:07:02.558 "nbd_device": "/dev/nbd1", 00:07:02.558 "bdev_name": "Nvme1n1" 00:07:02.558 }, 00:07:02.558 { 00:07:02.558 "nbd_device": "/dev/nbd2", 00:07:02.558 "bdev_name": "Nvme2n1" 00:07:02.558 }, 00:07:02.558 { 00:07:02.558 "nbd_device": "/dev/nbd3", 00:07:02.558 "bdev_name": "Nvme2n2" 00:07:02.558 }, 00:07:02.558 { 00:07:02.558 "nbd_device": "/dev/nbd4", 00:07:02.558 "bdev_name": "Nvme2n3" 00:07:02.558 }, 00:07:02.558 { 00:07:02.558 "nbd_device": "/dev/nbd5", 00:07:02.558 "bdev_name": "Nvme3n1" 00:07:02.558 } 00:07:02.558 ]' 00:07:02.558 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:02.558 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:02.558 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.558 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:02.558 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.558 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:02.558 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.558 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.816 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.816 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.817 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.817 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.817 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.817 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.817 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.817 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.817 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.817 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:03.075 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.075 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.075 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.075 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.075 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.075 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.075 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.075 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.075 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.075 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:03.333 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:03.333 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:03.333 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:03.333 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.333 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.333 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:03.333 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.333 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.333 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.333 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:03.591 
03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:03.591 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:03.591 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:03.591 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.591 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.591 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:03.591 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.591 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.591 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.591 03:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:03.591 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:03.591 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:03.591 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:03.591 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.591 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.591 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:03.591 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.591 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.591 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.591 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:03.849 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:03.849 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:03.849 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:03.849 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.849 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.849 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:03.849 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.849 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.849 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.849 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.849 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@65 -- # echo '' 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.107 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:04.108 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:04.408 /dev/nbd0 00:07:04.408 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.408 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.408 03:15:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:04.408 03:15:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.408 03:15:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.408 03:15:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.408 03:15:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:04.408 03:15:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.408 
03:15:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.408 03:15:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.408 03:15:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.408 1+0 records in 00:07:04.408 1+0 records out 00:07:04.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628254 s, 6.5 MB/s 00:07:04.408 03:15:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.408 03:15:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.408 03:15:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.408 03:15:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.408 03:15:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.408 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.408 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:04.408 03:15:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:04.688 /dev/nbd1 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.688 1+0 records in 00:07:04.688 1+0 records out 00:07:04.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348547 s, 11.8 MB/s 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:04.688 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:04.946 /dev/nbd10 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.946 1+0 records in 00:07:04.946 1+0 records out 00:07:04.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358994 s, 11.4 MB/s 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:04.946 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:05.204 /dev/nbd11 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.204 1+0 records in 00:07:05.204 1+0 records 
out 00:07:05.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504675 s, 8.1 MB/s 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:05.204 /dev/nbd12 00:07:05.204 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:05.461 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:05.461 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:05.461 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.461 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.461 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.462 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:05.462 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.462 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.462 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.462 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.462 1+0 records in 00:07:05.462 1+0 records out 00:07:05.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422681 s, 9.7 MB/s 00:07:05.462 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.462 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.462 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.462 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.462 03:15:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.462 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.462 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.462 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:05.462 /dev/nbd13 00:07:05.462 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:05.462 03:15:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:05.462 03:15:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:05.462 03:15:53 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.462 03:15:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.462 03:15:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.462 03:15:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:05.462 03:15:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.462 03:15:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.462 03:15:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.462 03:15:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.462 1+0 records in 00:07:05.462 1+0 records out 00:07:05.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403062 s, 10.2 MB/s 00:07:05.462 03:15:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.462 03:15:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.462 03:15:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.462 03:15:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.462 03:15:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.462 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.462 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.462 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.462 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.462 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.719 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.719 { 00:07:05.719 "nbd_device": "/dev/nbd0", 00:07:05.719 "bdev_name": "Nvme0n1" 00:07:05.719 }, 00:07:05.719 { 00:07:05.719 "nbd_device": "/dev/nbd1", 00:07:05.719 "bdev_name": "Nvme1n1" 00:07:05.719 }, 00:07:05.719 { 00:07:05.719 "nbd_device": "/dev/nbd10", 00:07:05.719 "bdev_name": "Nvme2n1" 00:07:05.719 }, 00:07:05.719 { 00:07:05.719 "nbd_device": "/dev/nbd11", 00:07:05.719 "bdev_name": "Nvme2n2" 00:07:05.719 }, 00:07:05.719 { 00:07:05.719 "nbd_device": "/dev/nbd12", 00:07:05.719 "bdev_name": "Nvme2n3" 00:07:05.719 }, 00:07:05.719 { 00:07:05.719 "nbd_device": "/dev/nbd13", 00:07:05.719 "bdev_name": "Nvme3n1" 00:07:05.719 } 00:07:05.719 ]' 00:07:05.719 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.719 { 00:07:05.719 "nbd_device": "/dev/nbd0", 00:07:05.719 "bdev_name": "Nvme0n1" 00:07:05.719 }, 00:07:05.719 { 00:07:05.719 "nbd_device": "/dev/nbd1", 00:07:05.719 "bdev_name": "Nvme1n1" 00:07:05.719 }, 00:07:05.719 { 00:07:05.719 "nbd_device": "/dev/nbd10", 00:07:05.719 "bdev_name": "Nvme2n1" 00:07:05.719 }, 00:07:05.719 { 00:07:05.719 "nbd_device": "/dev/nbd11", 00:07:05.719 "bdev_name": "Nvme2n2" 00:07:05.719 }, 00:07:05.719 { 00:07:05.719 "nbd_device": "/dev/nbd12", 00:07:05.719 "bdev_name": "Nvme2n3" 00:07:05.719 }, 00:07:05.719 { 00:07:05.719 "nbd_device": "/dev/nbd13", 00:07:05.719 "bdev_name": 
"Nvme3n1" 00:07:05.719 } 00:07:05.719 ]' 00:07:05.719 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.719 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.719 /dev/nbd1 00:07:05.719 /dev/nbd10 00:07:05.719 /dev/nbd11 00:07:05.719 /dev/nbd12 00:07:05.719 /dev/nbd13' 00:07:05.719 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.719 /dev/nbd1 00:07:05.719 /dev/nbd10 00:07:05.719 /dev/nbd11 00:07:05.719 /dev/nbd12 00:07:05.719 /dev/nbd13' 00:07:05.719 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.719 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:05.719 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:05.719 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:05.719 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:05.719 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:05.719 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:05.719 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.719 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.720 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:05.720 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.720 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:05.978 256+0 records in 00:07:05.978 256+0 records out 00:07:05.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105615 s, 99.3 MB/s 00:07:05.978 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.978 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:05.978 256+0 records in 00:07:05.978 256+0 records out 00:07:05.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0537858 s, 19.5 MB/s 00:07:05.978 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.978 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:05.978 256+0 records in 00:07:05.978 256+0 records out 00:07:05.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0675182 s, 15.5 MB/s 00:07:05.978 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.978 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:05.978 256+0 records in 00:07:05.978 256+0 records out 00:07:05.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0617364 s, 17.0 MB/s 00:07:05.978 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.978 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 
bs=4096 count=256 oflag=direct 00:07:05.978 256+0 records in 00:07:05.978 256+0 records out 00:07:05.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0614541 s, 17.1 MB/s 00:07:05.978 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:06.236 256+0 records in 00:07:06.236 256+0 records out 00:07:06.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0619262 s, 16.9 MB/s 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:06.236 256+0 records in 00:07:06.236 256+0 records out 00:07:06.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0638097 s, 16.4 MB/s 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.236 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.495 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.495 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.495 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.495 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.495 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.495 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.495 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.495 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.495 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.495 03:15:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.753 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.753 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.753 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.753 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.753 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.753 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.753 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.753 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.753 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.753 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.011 03:15:54 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.011 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:07.269 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:07.269 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:07.269 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:07.269 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.269 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.269 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:07.269 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.269 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.269 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.269 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:07.528 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:07.528 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:07.528 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:07.528 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.528 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.528 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:07.528 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.528 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.528 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.528 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.528 03:15:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.786 03:15:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[]' 00:07:07.786 03:15:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.786 03:15:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.786 03:15:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.786 03:15:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.786 03:15:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.786 03:15:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:07.786 03:15:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.786 03:15:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.786 03:15:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:07.786 03:15:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:07.786 03:15:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:07.786 03:15:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:07.786 03:15:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.786 03:15:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:07.786 03:15:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:08.045 malloc_lvol_verify 00:07:08.045 03:15:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:08.303 b253b6d9-7db7-432d-a940-b08785b31aac 00:07:08.303 03:15:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:08.303 b1deb626-b07b-4600-b17e-e5ecc5f30f0a 00:07:08.303 03:15:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:08.561 /dev/nbd0 00:07:08.561 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:08.561 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:08.561 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:08.561 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:08.561 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:08.561 mke2fs 1.47.0 (5-Feb-2023) 00:07:08.561 Discarding device blocks: 0/4096 done 00:07:08.561 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:08.561 00:07:08.561 Allocating group tables: 0/1 done 00:07:08.561 Writing inode tables: 0/1 done 00:07:08.561 Creating journal (1024 blocks): done 00:07:08.561 Writing superblocks and filesystem accounting information: 0/1 done 00:07:08.561 00:07:08.561 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:08.561 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.561 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:08.561 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 
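The lvol-verify pass above can be replayed by hand against a running spdk-nbd app. A minimal sketch from the repo root, using the same RPCs and sizes seen in the trace (socket path copied from the log; /dev/nbd0 assumed free):

  # 16 MB malloc bdev with 512-byte blocks, then an lvstore and a 4 MB lvol on top
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
  # expose the lvol as a kernel block device and prove it can hold a filesystem
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
  mkfs.ext4 /dev/nbd0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0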
00:07:08.561 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:08.561 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.561 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73452 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73452 ']' 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73452 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73452 00:07:08.820 killing process with pid 73452 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73452' 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73452 00:07:08.820 03:15:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73452 00:07:09.076 ************************************ 00:07:09.076 END TEST bdev_nbd 00:07:09.076 ************************************ 00:07:09.076 03:15:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:09.076 00:07:09.076 real 0m8.887s 00:07:09.076 user 0m13.092s 00:07:09.076 sys 0m3.022s 00:07:09.076 03:15:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.076 03:15:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:09.076 skipping fio tests on NVMe due to multi-ns failures. 00:07:09.076 03:15:56 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:09.076 03:15:56 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:09.077 03:15:56 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
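The waitfornbd_exit helper seen in the teardown above polls /proc/partitions until the nbd device disappears. A rough equivalent of the loop in the trace (the retry cap of 20 is from the log; the sleep interval is assumed):

  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # done once the device name no longer shows up in the partition table
          grep -q -w "$nbd_name" /proc/partitions || return 0
          sleep 0.1
      done
      return 1
  }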
00:07:09.077 03:15:56 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:09.077 03:15:56 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:09.077 03:15:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:09.077 03:15:56 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.077 03:15:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:09.077 ************************************ 00:07:09.077 START TEST bdev_verify 00:07:09.077 ************************************ 00:07:09.077 03:15:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:09.077 [2024-11-21 03:15:56.540069] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:09.077 [2024-11-21 03:15:56.540181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73808 ] 00:07:09.333 [2024-11-21 03:15:56.674202] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:09.333 [2024-11-21 03:15:56.698753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:09.333 [2024-11-21 03:15:56.717995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.333 [2024-11-21 03:15:56.718024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.591 Running I/O for 5 seconds... 
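The bdev_verify run that starts here is a plain bdevperf invocation and can be launched standalone from the repo root; arguments copied from the trace:

  # queue depth 128, 4 KiB I/O, verify workload for 5 s on two cores (mask 0x3)
  build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3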
00:07:11.902 20800.00 IOPS, 81.25 MiB/s [2024-11-21T03:16:00.401Z] 23744.00 IOPS, 92.75 MiB/s [2024-11-21T03:16:01.336Z] 24640.00 IOPS, 96.25 MiB/s [2024-11-21T03:16:02.270Z] 24544.00 IOPS, 95.88 MiB/s [2024-11-21T03:16:02.270Z] 24294.40 IOPS, 94.90 MiB/s 00:07:14.705 Latency(us) 00:07:14.705 [2024-11-21T03:16:02.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:14.705 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:14.705 Verification LBA range: start 0x0 length 0xbd0bd 00:07:14.705 Nvme0n1 : 5.09 2036.48 7.96 0.00 0.00 62701.26 10233.70 72190.42 00:07:14.705 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:14.705 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:14.705 Nvme0n1 : 5.08 1965.70 7.68 0.00 0.00 64960.22 10384.94 69770.63 00:07:14.705 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:14.705 Verification LBA range: start 0x0 length 0xa0000 00:07:14.705 Nvme1n1 : 5.09 2035.90 7.95 0.00 0.00 62567.50 11695.66 60898.07 00:07:14.705 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:14.705 Verification LBA range: start 0xa0000 length 0xa0000 00:07:14.705 Nvme1n1 : 5.08 1964.27 7.67 0.00 0.00 64880.18 12855.14 66544.25 00:07:14.705 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:14.705 Verification LBA range: start 0x0 length 0x80000 00:07:14.705 Nvme2n1 : 5.09 2035.33 7.95 0.00 0.00 62466.29 12754.31 60091.47 00:07:14.705 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:14.705 Verification LBA range: start 0x80000 length 0x80000 00:07:14.705 Nvme2n1 : 5.08 1963.58 7.67 0.00 0.00 64771.19 13510.50 63317.86 00:07:14.705 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:14.705 Verification LBA range: start 0x0 length 0x80000 00:07:14.705 Nvme2n2 : 5.10 2034.13 7.95 0.00 0.00 62360.03 13812.97 62511.26 00:07:14.705 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:14.705 Verification LBA range: start 0x80000 length 0x80000 00:07:14.705 Nvme2n2 : 5.09 1963.00 7.67 0.00 0.00 64662.26 14216.27 65737.65 00:07:14.705 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:14.705 Verification LBA range: start 0x0 length 0x80000 00:07:14.705 Nvme2n3 : 5.10 2033.59 7.94 0.00 0.00 62250.09 12048.54 62914.56 00:07:14.705 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:14.705 Verification LBA range: start 0x80000 length 0x80000 00:07:14.705 Nvme2n3 : 5.09 1962.48 7.67 0.00 0.00 64541.32 12703.90 69770.63 00:07:14.705 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:14.705 Verification LBA range: start 0x0 length 0x20000 00:07:14.705 Nvme3n1 : 5.10 2033.04 7.94 0.00 0.00 62150.05 9074.22 66947.54 00:07:14.705 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:14.705 Verification LBA range: start 0x20000 length 0x20000 00:07:14.705 Nvme3n1 : 5.09 1961.91 7.66 0.00 0.00 64436.22 8771.74 72190.42 00:07:14.705 [2024-11-21T03:16:02.270Z] =================================================================================================================== 00:07:14.705 [2024-11-21T03:16:02.270Z] Total : 23989.42 93.71 0.00 0.00 63540.59 8771.74 72190.42 00:07:15.638 00:07:15.638 real 0m6.451s 00:07:15.638 user 0m12.020s 00:07:15.638 sys 0m0.207s 00:07:15.638 03:16:02 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.638 03:16:02 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:15.638 ************************************ 00:07:15.638 END TEST bdev_verify 00:07:15.638 ************************************ 00:07:15.638 03:16:02 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:15.638 03:16:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:15.638 03:16:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.638 03:16:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:15.638 ************************************ 00:07:15.638 START TEST bdev_verify_big_io 00:07:15.638 ************************************ 00:07:15.638 03:16:02 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:15.638 [2024-11-21 03:16:03.058958] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:15.638 [2024-11-21 03:16:03.059103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73901 ] 00:07:15.638 [2024-11-21 03:16:03.194113] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:15.896 [2024-11-21 03:16:03.225984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:15.896 [2024-11-21 03:16:03.248741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.896 [2024-11-21 03:16:03.248833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.153 Running I/O for 5 seconds... 
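bdev_verify_big_io is the same harness with the I/O size raised to 64 KiB, which is why the IOPS figures below drop by roughly an order of magnitude while the per-I/O payload grows:

  # identical to bdev_verify except -o 65536 (64 KiB I/O)
  build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3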
00:07:20.580 474.00 IOPS, 29.62 MiB/s [2024-11-21T03:16:09.548Z] 1450.50 IOPS, 90.66 MiB/s [2024-11-21T03:16:09.807Z] 1991.67 IOPS, 124.48 MiB/s 00:07:22.242 Latency(us) 00:07:22.242 [2024-11-21T03:16:09.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.242 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.242 Verification LBA range: start 0x0 length 0xbd0b 00:07:22.242 Nvme0n1 : 5.78 106.43 6.65 0.00 0.00 1139569.23 15526.99 1593835.52 00:07:22.242 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.242 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:22.242 Nvme0n1 : 5.87 115.57 7.22 0.00 0.00 1057361.13 27021.00 1142141.24 00:07:22.242 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.242 Verification LBA range: start 0x0 length 0xa000 00:07:22.242 Nvme1n1 : 5.95 111.91 6.99 0.00 0.00 1057768.45 38313.35 1619646.62 00:07:22.242 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.242 Verification LBA range: start 0xa000 length 0xa000 00:07:22.242 Nvme1n1 : 5.78 115.60 7.23 0.00 0.00 1032082.72 89935.56 961463.53 00:07:22.242 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.242 Verification LBA range: start 0x0 length 0x8000 00:07:22.242 Nvme2n1 : 5.99 115.51 7.22 0.00 0.00 995795.19 61301.37 1639004.95 00:07:22.242 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.242 Verification LBA range: start 0x8000 length 0x8000 00:07:22.242 Nvme2n1 : 5.87 119.95 7.50 0.00 0.00 968514.95 85499.27 896935.78 00:07:22.242 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.242 Verification LBA range: start 0x0 length 0x8000 00:07:22.242 Nvme2n2 : 5.99 115.82 7.24 0.00 0.00 956079.15 79449.80 1677721.60 00:07:22.242 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.242 Verification LBA range: start 0x8000 length 0x8000 00:07:22.242 Nvme2n2 : 5.95 124.30 7.77 0.00 0.00 904308.24 46177.67 922746.88 00:07:22.242 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.242 Verification LBA range: start 0x0 length 0x8000 00:07:22.242 Nvme2n3 : 6.01 124.02 7.75 0.00 0.00 868433.71 12351.02 1703532.70 00:07:22.242 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.242 Verification LBA range: start 0x8000 length 0x8000 00:07:22.242 Nvme2n3 : 5.99 127.59 7.97 0.00 0.00 851020.59 35288.62 1129235.69 00:07:22.242 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.242 Verification LBA range: start 0x0 length 0x2000 00:07:22.242 Nvme3n1 : 6.05 151.48 9.47 0.00 0.00 688604.77 639.61 1561571.64 00:07:22.242 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.242 Verification LBA range: start 0x2000 length 0x2000 00:07:22.242 Nvme3n1 : 5.99 144.29 9.02 0.00 0.00 732259.55 1852.65 1058255.16 00:07:22.242 [2024-11-21T03:16:09.807Z] =================================================================================================================== 00:07:22.242 [2024-11-21T03:16:09.807Z] Total : 1472.46 92.03 0.00 0.00 923355.68 639.61 1703532.70 00:07:23.616 00:07:23.616 real 0m8.070s 00:07:23.616 user 0m14.551s 00:07:23.616 sys 0m0.247s 00:07:23.616 03:16:11 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.616 ************************************ 
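When only the aggregate matters, the Total row of a summary like the one above can be pulled out of a saved run. A throwaway sketch, assuming the output was captured to bdevperf.log and the timestamp prefixes were stripped first:

  # the Total row reads: Total : <IOPS> <MiB/s> <Fail/s> <TO/s> ...
  awk '$1 == "Total" && $2 == ":" { print "IOPS=" $3, "MiB/s=" $4 }' bdevperf.log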
00:07:23.616 END TEST bdev_verify_big_io 00:07:23.616 ************************************ 00:07:23.616 03:16:11 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:23.616 03:16:11 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.616 03:16:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:23.616 03:16:11 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.617 03:16:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:23.617 ************************************ 00:07:23.617 START TEST bdev_write_zeroes 00:07:23.617 ************************************ 00:07:23.617 03:16:11 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.617 [2024-11-21 03:16:11.176206] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:23.617 [2024-11-21 03:16:11.176312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74006 ] 00:07:23.875 [2024-11-21 03:16:11.307741] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:23.875 [2024-11-21 03:16:11.338772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.875 [2024-11-21 03:16:11.357935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.440 Running I/O for 1 seconds... 
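bdev_write_zeroes swaps the workload and shortens the run to one second on a single core; the invocation from the trace:

  # write_zeroes workload, 4 KiB I/O, 1 s run
  build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1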
00:07:25.377 62592.00 IOPS, 244.50 MiB/s 00:07:25.377 Latency(us) 00:07:25.377 [2024-11-21T03:16:12.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.377 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.377 Nvme0n1 : 1.02 10417.41 40.69 0.00 0.00 12261.88 5847.83 21576.47 00:07:25.377 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.377 Nvme1n1 : 1.02 10404.69 40.64 0.00 0.00 12262.69 8922.98 21778.12 00:07:25.377 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.377 Nvme2n1 : 1.02 10392.38 40.60 0.00 0.00 12235.61 8721.33 20366.57 00:07:25.377 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.377 Nvme2n2 : 1.02 10380.10 40.55 0.00 0.00 12202.72 8620.50 19459.15 00:07:25.377 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.377 Nvme2n3 : 1.02 10367.72 40.50 0.00 0.00 12200.34 8519.68 19761.62 00:07:25.377 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.377 Nvme3n1 : 1.03 10355.48 40.45 0.00 0.00 12188.37 8721.33 21173.17 00:07:25.377 [2024-11-21T03:16:12.942Z] =================================================================================================================== 00:07:25.377 [2024-11-21T03:16:12.942Z] Total : 62317.79 243.43 0.00 0.00 12225.27 5847.83 21778.12 00:07:25.637 00:07:25.637 real 0m1.832s 00:07:25.637 user 0m1.558s 00:07:25.637 sys 0m0.162s 00:07:25.637 03:16:12 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.637 03:16:12 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:25.637 ************************************ 00:07:25.637 END TEST bdev_write_zeroes 00:07:25.637 ************************************ 00:07:25.637 03:16:12 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.637 03:16:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:25.637 03:16:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.637 03:16:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.637 ************************************ 00:07:25.637 START TEST bdev_json_nonenclosed 00:07:25.637 ************************************ 00:07:25.637 03:16:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.637 [2024-11-21 03:16:13.080531] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:25.637 [2024-11-21 03:16:13.080659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74047 ] 00:07:25.898 [2024-11-21 03:16:13.213852] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
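The bdev_json_nonenclosed test that starts here hands bdevperf a config whose subsystem entries are not wrapped in a top-level object, so json_config rejects it. For contrast, a minimal sketch of the shape that is accepted (the malloc entry is illustrative, not the fixture's content):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "method": "bdev_malloc_create",
            "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 } }
        ]
      }
    ]
  }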
00:07:25.898 [2024-11-21 03:16:13.245540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.898 [2024-11-21 03:16:13.264596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.898 [2024-11-21 03:16:13.264675] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:25.898 [2024-11-21 03:16:13.264691] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:25.898 [2024-11-21 03:16:13.264703] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.898 00:07:25.898 real 0m0.326s 00:07:25.898 user 0m0.125s 00:07:25.898 sys 0m0.097s 00:07:25.898 03:16:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.898 ************************************ 00:07:25.898 03:16:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:25.898 END TEST bdev_json_nonenclosed 00:07:25.898 ************************************ 00:07:25.898 03:16:13 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.898 03:16:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:25.898 03:16:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.898 03:16:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.898 ************************************ 00:07:25.898 START TEST bdev_json_nonarray 00:07:25.898 ************************************ 00:07:25.898 03:16:13 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.160 [2024-11-21 03:16:13.459798] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:26.160 [2024-11-21 03:16:13.459971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74068 ] 00:07:26.160 [2024-11-21 03:16:13.596096] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:26.160 [2024-11-21 03:16:13.627087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.160 [2024-11-21 03:16:13.656693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.160 [2024-11-21 03:16:13.656802] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
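bdev_json_nonarray covers the complementary failure just logged: a top-level object exists but "subsystems" is an object rather than an array. A sketch of the rejected shape (not the actual fixture):

  { "subsystems": { "subsystem": "bdev", "config": [] } }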
00:07:26.160 [2024-11-21 03:16:13.656822] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:26.160 [2024-11-21 03:16:13.656833] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.422 00:07:26.422 real 0m0.351s 00:07:26.422 user 0m0.136s 00:07:26.422 sys 0m0.110s 00:07:26.422 03:16:13 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.422 ************************************ 00:07:26.422 END TEST bdev_json_nonarray 00:07:26.422 ************************************ 00:07:26.422 03:16:13 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:26.422 03:16:13 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:26.422 03:16:13 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:26.422 03:16:13 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:26.422 03:16:13 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:26.422 03:16:13 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:26.422 03:16:13 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:26.422 03:16:13 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:26.422 03:16:13 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:26.422 03:16:13 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:26.422 03:16:13 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:26.422 03:16:13 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:26.422 00:07:26.422 real 0m30.467s 00:07:26.422 user 0m47.634s 00:07:26.422 sys 0m4.993s 00:07:26.422 03:16:13 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.422 ************************************ 00:07:26.422 END TEST blockdev_nvme 00:07:26.422 ************************************ 00:07:26.422 03:16:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:26.422 03:16:13 -- spdk/autotest.sh@209 -- # uname -s 00:07:26.422 03:16:13 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:26.422 03:16:13 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:26.422 03:16:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:26.422 03:16:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.422 03:16:13 -- common/autotest_common.sh@10 -- # set +x 00:07:26.422 ************************************ 00:07:26.422 START TEST blockdev_nvme_gpt 00:07:26.422 ************************************ 00:07:26.422 03:16:13 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:26.422 * Looking for test storage... 
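The GPT suite beginning here is the same blockdev.sh driver run with a different test type (it sets test_type=gpt internally, as the trace below shows); the run_test call reduces to:

  # from the repo root: run the gpt flavour of the blockdev tests
  test/bdev/blockdev.sh gpt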
00:07:26.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:26.422 03:16:13 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:26.422 03:16:13 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:26.422 03:16:13 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:26.682 03:16:13 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:26.682 03:16:13 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.682 03:16:13 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.682 03:16:13 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.682 03:16:13 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.682 03:16:13 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.682 03:16:13 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.682 03:16:13 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.682 03:16:13 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.682 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.682 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.682 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.682 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:26.682 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:26.682 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.682 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:26.683 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:26.683 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:26.683 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.683 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:26.683 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.683 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:26.683 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:26.683 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.683 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:26.683 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.683 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.683 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.683 03:16:14 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:26.683 03:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.683 03:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:26.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.683 --rc genhtml_branch_coverage=1 00:07:26.683 --rc genhtml_function_coverage=1 00:07:26.683 --rc genhtml_legend=1 00:07:26.683 --rc geninfo_all_blocks=1 00:07:26.683 --rc geninfo_unexecuted_blocks=1 00:07:26.683 00:07:26.683 ' 00:07:26.683 03:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:26.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.683 --rc 
genhtml_branch_coverage=1 00:07:26.683 --rc genhtml_function_coverage=1 00:07:26.683 --rc genhtml_legend=1 00:07:26.683 --rc geninfo_all_blocks=1 00:07:26.683 --rc geninfo_unexecuted_blocks=1 00:07:26.683 00:07:26.683 ' 00:07:26.683 03:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:26.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.683 --rc genhtml_branch_coverage=1 00:07:26.683 --rc genhtml_function_coverage=1 00:07:26.683 --rc genhtml_legend=1 00:07:26.683 --rc geninfo_all_blocks=1 00:07:26.683 --rc geninfo_unexecuted_blocks=1 00:07:26.683 00:07:26.683 ' 00:07:26.683 03:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:26.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.683 --rc genhtml_branch_coverage=1 00:07:26.683 --rc genhtml_function_coverage=1 00:07:26.683 --rc genhtml_legend=1 00:07:26.683 --rc geninfo_all_blocks=1 00:07:26.683 --rc geninfo_unexecuted_blocks=1 00:07:26.683 00:07:26.683 ' 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74147 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 74147 
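waitforlisten blocks until the freshly started spdk_tgt answers on its RPC socket. A rough stand-in using rpc.py polling (helper shape assumed; the real autotest helper does more bookkeeping):

  # start the target and wait for /var/tmp/spdk.sock to accept RPCs
  build/bin/spdk_tgt &
  tgt_pid=$!
  until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$tgt_pid" || exit 1   # give up if the target died during startup
      sleep 0.5
  done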
00:07:26.683 03:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 74147 ']' 00:07:26.683 03:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.683 03:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.683 03:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.683 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:26.683 03:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.683 03:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.683 [2024-11-21 03:16:14.129703] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:26.683 [2024-11-21 03:16:14.129844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74147 ] 00:07:26.944 [2024-11-21 03:16:14.267120] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:26.944 [2024-11-21 03:16:14.298084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.944 [2024-11-21 03:16:14.317194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.515 03:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.515 03:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:27.515 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:27.515 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:27.515 03:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:27.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:27.776 Waiting for block devices as requested 00:07:28.034 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.034 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.034 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.034 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:33.320 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:33.320 03:16:20 
blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:33.320 BYT; 00:07:33.320 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:33.320 BYT; 00:07:33.320 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:33.320 03:16:20 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:33.320 03:16:20 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:33.320 03:16:20 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:33.320 03:16:20 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:33.320 03:16:20 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:33.320 03:16:20 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:33.320 03:16:20 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:33.320 03:16:20 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:33.320 03:16:20 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:33.320 03:16:20 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:33.320 03:16:20 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:33.320 03:16:20 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:33.320 03:16:20 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:33.320 03:16:20 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w 
SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:33.320 03:16:20 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:33.320 03:16:20 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:33.320 03:16:20 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:33.320 03:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:34.277 The operation has completed successfully. 00:07:34.277 03:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:35.212 The operation has completed successfully. 00:07:35.212 03:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:35.779 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:36.037 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.037 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.037 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.295 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.295 03:16:23 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:36.295 03:16:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.295 03:16:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.295 [] 00:07:36.295 03:16:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.295 03:16:23 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:36.295 03:16:23 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:36.295 03:16:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:36.295 03:16:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:36.295 03:16:23 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:36.296 03:16:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.296 03:16:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.555 03:16:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.555 03:16:23 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:36.555 03:16:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.555 03:16:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.555 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.555 03:16:24 blockdev_nvme_gpt -- 
bdev/blockdev.sh@739 -- # cat 00:07:36.555 03:16:24 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:36.555 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.555 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.555 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.555 03:16:24 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:36.555 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.555 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.555 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.555 03:16:24 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:36.555 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.555 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.555 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.555 03:16:24 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:36.555 03:16:24 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:36.555 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.555 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.555 03:16:24 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:36.555 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.555 03:16:24 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:36.555 03:16:24 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:36.556 03:16:24 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "961bb18c-1048-445a-ab8d-1aae976e5fdd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "961bb18c-1048-445a-ab8d-1aae976e5fdd",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' 
'}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "b91e0524-6202-4bd7-b47c-0ded6df3a345"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b91e0524-6202-4bd7-b47c-0ded6df3a345",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' 
"security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "0b8153c9-249b-4d81-aadd-78d5f51d3d38"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0b8153c9-249b-4d81-aadd-78d5f51d3d38",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c7340454-8a55-43ec-b873-291a27fbee24"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c7340454-8a55-43ec-b873-291a27fbee24",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "bcbd09ae-f08f-42ba-b25d-e8befa4e0353"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"bcbd09ae-f08f-42ba-b25d-e8befa4e0353",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:36.814 03:16:24 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:36.814 03:16:24 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:36.814 03:16:24 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:36.814 03:16:24 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 74147 00:07:36.814 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 74147 ']' 00:07:36.814 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 74147 00:07:36.814 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:36.814 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.814 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74147 00:07:36.814 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.814 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:36.814 killing process with pid 74147 00:07:36.814 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74147' 00:07:36.814 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 74147 00:07:36.814 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 74147 00:07:37.072 03:16:24 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:37.073 03:16:24 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:37.073 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:37.073 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.073 03:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.073 ************************************ 00:07:37.073 START TEST bdev_hello_world 00:07:37.073 ************************************ 00:07:37.073 03:16:24 blockdev_nvme_gpt.bdev_hello_world -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:37.073 [2024-11-21 03:16:24.487004] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:37.073 [2024-11-21 03:16:24.487155] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74760 ] 00:07:37.332 [2024-11-21 03:16:24.634815] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:37.332 [2024-11-21 03:16:24.659114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.332 [2024-11-21 03:16:24.676772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.590 [2024-11-21 03:16:25.035423] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:37.590 [2024-11-21 03:16:25.035458] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:37.591 [2024-11-21 03:16:25.035476] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:37.591 [2024-11-21 03:16:25.037132] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:37.591 [2024-11-21 03:16:25.037420] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:37.591 [2024-11-21 03:16:25.037442] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:37.591 [2024-11-21 03:16:25.037660] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:37.591 00:07:37.591 [2024-11-21 03:16:25.037677] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:37.852 00:07:37.852 real 0m0.752s 00:07:37.852 user 0m0.483s 00:07:37.852 sys 0m0.167s 00:07:37.852 03:16:25 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.852 03:16:25 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:37.852 ************************************ 00:07:37.852 END TEST bdev_hello_world 00:07:37.852 ************************************ 00:07:37.852 03:16:25 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:37.852 03:16:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:37.852 03:16:25 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.852 03:16:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.852 ************************************ 00:07:37.852 START TEST bdev_bounds 00:07:37.852 ************************************ 00:07:37.852 03:16:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:37.852 03:16:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74787 00:07:37.852 03:16:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:37.852 03:16:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:37.852 Process bdevio pid: 74787 00:07:37.852 03:16:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74787' 00:07:37.852 
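A note on the GPT typing steps logged above: blockdev.sh derives the SPDK partition type GUIDs by parsing module/bdev/gpt/gpt.h with an IFS='()' read, then retypes the two freshly created test partitions with sgdisk. A minimal standalone sketch of the same technique, using the paths, GUIDs, and device from the log above (variable names here are illustrative):

    # Pull the GUID fields out of the SPDK_GPT_PART_TYPE_GUID(...) macro line.
    GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    IFS='()' read -r _ raw _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
    guid=${raw//, /-}     # "0x6527994e, 0x2c5a, ..." -> "0x6527994e-0x2c5a-..."
    guid=${guid//0x/}     # -> "6527994e-2c5a-4eec-9613-8f5944074e8b"

    # Retype partition 1 and pin its unique GUID, as blockdev.sh@131 does.
    sgdisk -t "1:$guid" -u "1:6f89f330-603b-4116-ac73-2ca8eae53030" /dev/nvme0n1

Note that grep -w matches only the non-_OLD macro here, since the underscore in SPDK_GPT_PART_TYPE_GUID_OLD is a word character; the _OLD GUID is fetched by the separate get_spdk_gpt_old call.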
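The controller attach step earlier (blockdev.sh@83) feeds gen_nvme.sh output to load_subsystem_config over RPC. The same bdev subsystem config, wrapped in a top-level "subsystems" array, can also be handed to any SPDK app at startup via --json; a trimmed sketch for one of the four controllers above (the file path is illustrative):

    cat > /tmp/nvme_bdev.json <<'EOF'
    { "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
            "method": "bdev_nvme_attach_controller",
            "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        } ]
    } ] }
    EOF
    build/bin/spdk_tgt --json /tmp/nvme_bdev.json

Each attached controller yields bdevs named <name>n<nsid> (Nvme0n1 and so on), which is where the bdev names used by the tests below come from.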
03:16:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74787 00:07:37.852 03:16:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74787 ']' 00:07:37.852 03:16:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.852 03:16:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.852 03:16:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.852 03:16:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.852 03:16:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:37.852 [2024-11-21 03:16:25.267734] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:37.852 [2024-11-21 03:16:25.267863] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74787 ] 00:07:37.852 [2024-11-21 03:16:25.400539] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:38.114 [2024-11-21 03:16:25.426167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.114 [2024-11-21 03:16:25.445712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.114 [2024-11-21 03:16:25.445891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.114 [2024-11-21 03:16:25.445997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.688 03:16:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.688 03:16:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:38.688 03:16:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:38.688 I/O targets: 00:07:38.688 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:38.688 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:38.688 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:38.688 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:38.688 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:38.688 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:38.688 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:38.688 00:07:38.688 00:07:38.688 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.688 http://cunit.sourceforge.net/ 00:07:38.688 00:07:38.688 00:07:38.688 Suite: bdevio tests on: Nvme3n1 00:07:38.688 Test: blockdev write read block ...passed 00:07:38.688 Test: blockdev write zeroes read block ...passed 00:07:38.688 Test: blockdev write zeroes read no split ...passed 00:07:38.688 Test: blockdev write zeroes read split ...passed 00:07:38.688 Test: blockdev write zeroes read split partial ...passed 00:07:38.688 Test: blockdev reset ...[2024-11-21 03:16:26.194566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:38.688 passed 00:07:38.688 Test: blockdev write read 8 blocks ...[2024-11-21 03:16:26.196322] 
bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:07:38.688 passed 00:07:38.688 Test: blockdev write read size > 128k ...passed 00:07:38.688 Test: blockdev write read invalid size ...passed 00:07:38.688 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.688 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.688 Test: blockdev write read max offset ...passed 00:07:38.688 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.688 Test: blockdev writev readv 8 blocks ...passed 00:07:38.688 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.688 Test: blockdev writev readv block ...passed 00:07:38.688 Test: blockdev writev readv size > 128k ...passed 00:07:38.688 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.688 Test: blockdev comparev and writev ...[2024-11-21 03:16:26.201289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c420e000 len:0x1000 00:07:38.688 [2024-11-21 03:16:26.201336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:38.688 passed 00:07:38.688 Test: blockdev nvme passthru rw ...passed 00:07:38.688 Test: blockdev nvme passthru vendor specific ...[2024-11-21 03:16:26.201777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:38.688 [2024-11-21 03:16:26.201802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:38.688 passed 00:07:38.688 Test: blockdev nvme admin passthru ...passed 00:07:38.688 Test: blockdev copy ...passed 00:07:38.688 Suite: bdevio tests on: Nvme2n3 00:07:38.688 Test: blockdev write read block ...passed 00:07:38.688 Test: blockdev write zeroes read block ...passed 00:07:38.688 Test: blockdev write zeroes read no split ...passed 00:07:38.688 Test: blockdev write zeroes read split ...passed 00:07:38.688 Test: blockdev write zeroes read split partial ...passed 00:07:38.688 Test: blockdev reset ...[2024-11-21 03:16:26.217513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:38.688 [2024-11-21 03:16:26.219282] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
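The COMPARE FAILURE completions printed in these comparev tests are exercised deliberately; note that each test still reports passed. The "(02/85)" in the print is the NVMe status code type / status code pair. A small helper decoding the two pairs that appear in this log (values per the NVMe base specification):

    decode_status() {            # usage: decode_status 02 85
        case "$1/$2" in
            00/01) echo "Generic Command Status / Invalid Command Opcode" ;;  # the bogus passthru cmd
            02/85) echo "Media and Data Integrity Errors / Compare Failure" ;;  # the intended miscompare
            *)     echo "SCT 0x$1 / SC 0x$2 (see NVMe base spec)" ;;
        esac
    }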
00:07:38.688 passed 00:07:38.688 Test: blockdev write read 8 blocks ...passed 00:07:38.688 Test: blockdev write read size > 128k ...passed 00:07:38.688 Test: blockdev write read invalid size ...passed 00:07:38.688 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.688 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.688 Test: blockdev write read max offset ...passed 00:07:38.688 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.688 Test: blockdev writev readv 8 blocks ...passed 00:07:38.688 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.688 Test: blockdev writev readv block ...passed 00:07:38.688 Test: blockdev writev readv size > 128k ...passed 00:07:38.688 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.688 Test: blockdev comparev and writev ...[2024-11-21 03:16:26.223564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c420a000 len:0x1000 00:07:38.688 [2024-11-21 03:16:26.223609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:38.688 passed 00:07:38.688 Test: blockdev nvme passthru rw ...passed 00:07:38.688 Test: blockdev nvme passthru vendor specific ...passed 00:07:38.688 Test: blockdev nvme admin passthru ...[2024-11-21 03:16:26.224258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:38.688 [2024-11-21 03:16:26.224285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:38.688 passed 00:07:38.689 Test: blockdev copy ...passed 00:07:38.689 Suite: bdevio tests on: Nvme2n2 00:07:38.689 Test: blockdev write read block ...passed 00:07:38.689 Test: blockdev write zeroes read block ...passed 00:07:38.689 Test: blockdev write zeroes read no split ...passed 00:07:38.689 Test: blockdev write zeroes read split ...passed 00:07:38.689 Test: blockdev write zeroes read split partial ...passed 00:07:38.689 Test: blockdev reset ...[2024-11-21 03:16:26.239344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:38.689 [2024-11-21 03:16:26.240834] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
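Each suite begins by resetting the controller under test (nvme_ctrlr_disconnect followed by bdev_nvme_reset_ctrlr_complete, as in the notices above). Outside bdevio the same path can be driven over RPC; a sketch, assuming a running target with a controller named Nvme2 attached as above:

    # Trigger a reset of the NVMe controller behind the Nvme2 bdevs.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme2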
00:07:38.689 passed 00:07:38.689 Test: blockdev write read 8 blocks ...passed 00:07:38.689 Test: blockdev write read size > 128k ...passed 00:07:38.689 Test: blockdev write read invalid size ...passed 00:07:38.689 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.689 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.689 Test: blockdev write read max offset ...passed 00:07:38.689 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.689 Test: blockdev writev readv 8 blocks ...passed 00:07:38.689 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.689 Test: blockdev writev readv block ...passed 00:07:38.689 Test: blockdev writev readv size > 128k ...passed 00:07:38.689 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.689 Test: blockdev comparev and writev ...[2024-11-21 03:16:26.245592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ade05000 len:0x1000 00:07:38.689 passed 00:07:38.689 Test: blockdev nvme passthru rw ...[2024-11-21 03:16:26.245631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:38.689 passed 00:07:38.689 Test: blockdev nvme passthru vendor specific ...[2024-11-21 03:16:26.246062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:38.689 [2024-11-21 03:16:26.246087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:38.689 passed 00:07:38.949 Test: blockdev nvme admin passthru ...passed 00:07:38.949 Test: blockdev copy ...passed 00:07:38.949 Suite: bdevio tests on: Nvme2n1 00:07:38.949 Test: blockdev write read block ...passed 00:07:38.949 Test: blockdev write zeroes read block ...passed 00:07:38.949 Test: blockdev write zeroes read no split ...passed 00:07:38.949 Test: blockdev write zeroes read split ...passed 00:07:38.949 Test: blockdev write zeroes read split partial ...passed 00:07:38.949 Test: blockdev reset ...[2024-11-21 03:16:26.260288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:38.949 [2024-11-21 03:16:26.261778] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:38.949 passed 00:07:38.949 Test: blockdev write read 8 blocks ...passed 00:07:38.949 Test: blockdev write read size > 128k ...passed 00:07:38.949 Test: blockdev write read invalid size ...passed 00:07:38.949 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.949 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.949 Test: blockdev write read max offset ...passed 00:07:38.949 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.949 Test: blockdev writev readv 8 blocks ...passed 00:07:38.949 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.949 Test: blockdev writev readv block ...passed 00:07:38.949 Test: blockdev writev readv size > 128k ...passed 00:07:38.949 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.949 Test: blockdev comparev and writev ...[2024-11-21 03:16:26.266396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4602000 len:0x1000 00:07:38.949 passed 00:07:38.949 Test: blockdev nvme passthru rw ...[2024-11-21 03:16:26.266434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:38.949 passed 00:07:38.949 Test: blockdev nvme passthru vendor specific ...[2024-11-21 03:16:26.266873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:38.949 [2024-11-21 03:16:26.266905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:38.949 passed 00:07:38.949 Test: blockdev nvme admin passthru ...passed 00:07:38.949 Test: blockdev copy ...passed 00:07:38.949 Suite: bdevio tests on: Nvme1n1p2 00:07:38.949 Test: blockdev write read block ...passed 00:07:38.949 Test: blockdev write zeroes read block ...passed 00:07:38.949 Test: blockdev write zeroes read no split ...passed 00:07:38.949 Test: blockdev write zeroes read split ...passed 00:07:38.949 Test: blockdev write zeroes read split partial ...passed 00:07:38.949 Test: blockdev reset ...[2024-11-21 03:16:26.283669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:38.949 [2024-11-21 03:16:26.284961] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:38.949 passed 00:07:38.949 Test: blockdev write read 8 blocks ...passed 00:07:38.949 Test: blockdev write read size > 128k ...passed 00:07:38.949 Test: blockdev write read invalid size ...passed 00:07:38.949 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.949 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.949 Test: blockdev write read max offset ...passed 00:07:38.949 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.949 Test: blockdev writev readv 8 blocks ...passed 00:07:38.949 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.949 Test: blockdev writev readv block ...passed 00:07:38.949 Test: blockdev writev readv size > 128k ...passed 00:07:38.949 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.949 Test: blockdev comparev and writev ...[2024-11-21 03:16:26.289493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2dfe3b000 len:0x1000 00:07:38.949 [2024-11-21 03:16:26.289532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:38.949 passed 00:07:38.949 Test: blockdev nvme passthru rw ...passed 00:07:38.949 Test: blockdev nvme passthru vendor specific ...passed 00:07:38.949 Test: blockdev nvme admin passthru ...passed 00:07:38.949 Test: blockdev copy ...passed 00:07:38.949 Suite: bdevio tests on: Nvme1n1p1 00:07:38.949 Test: blockdev write read block ...passed 00:07:38.949 Test: blockdev write zeroes read block ...passed 00:07:38.949 Test: blockdev write zeroes read no split ...passed 00:07:38.949 Test: blockdev write zeroes read split ...passed 00:07:38.949 Test: blockdev write zeroes read split partial ...passed 00:07:38.949 Test: blockdev reset ...[2024-11-21 03:16:26.301013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:38.949 [2024-11-21 03:16:26.302219] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
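The comparev on Nvme1n1p2 above lands on base LBA 655360 rather than 0 because GPT partition bdevs are thin translations over the base namespace: block b of the partition maps to offset_blocks + b on Nvme1n1, with offset_blocks visible in the driver_specific.gpt section of the earlier bdev_get_bdevs dump. In shell arithmetic:

    offset_blocks=655360   # driver_specific.gpt.offset_blocks for Nvme1n1p2
    part_block=0           # block 0 of the partition bdev
    echo $((offset_blocks + part_block))   # -> 655360, the lba seen in the COMPARE print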
00:07:38.949 passed 00:07:38.949 Test: blockdev write read 8 blocks ...passed 00:07:38.949 Test: blockdev write read size > 128k ...passed 00:07:38.949 Test: blockdev write read invalid size ...passed 00:07:38.949 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.949 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.949 Test: blockdev write read max offset ...passed 00:07:38.949 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.949 Test: blockdev writev readv 8 blocks ...passed 00:07:38.949 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.949 Test: blockdev writev readv block ...passed 00:07:38.949 Test: blockdev writev readv size > 128k ...passed 00:07:38.949 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.949 Test: blockdev comparev and writev ...[2024-11-21 03:16:26.306742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2dfe37000 len:0x1000 00:07:38.949 passed 00:07:38.949 Test: blockdev nvme passthru rw ...passed 00:07:38.949 Test: blockdev nvme passthru vendor specific ...passed 00:07:38.949 Test: blockdev nvme admin passthru ...passed 00:07:38.949 Test: blockdev copy ...[2024-11-21 03:16:26.306782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:38.949 passed 00:07:38.949 Suite: bdevio tests on: Nvme0n1 00:07:38.949 Test: blockdev write read block ...passed 00:07:38.949 Test: blockdev write zeroes read block ...passed 00:07:38.949 Test: blockdev write zeroes read no split ...passed 00:07:38.949 Test: blockdev write zeroes read split ...passed 00:07:38.949 Test: blockdev write zeroes read split partial ...passed 00:07:38.949 Test: blockdev reset ...[2024-11-21 03:16:26.317782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:38.949 [2024-11-21 03:16:26.319083] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:38.949 passed 00:07:38.949 Test: blockdev write read 8 blocks ...passed 00:07:38.949 Test: blockdev write read size > 128k ...passed 00:07:38.949 Test: blockdev write read invalid size ...passed 00:07:38.949 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.949 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.949 Test: blockdev write read max offset ...passed 00:07:38.949 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.949 Test: blockdev writev readv 8 blocks ...passed 00:07:38.949 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.949 Test: blockdev writev readv block ...passed 00:07:38.949 Test: blockdev writev readv size > 128k ...passed 00:07:38.949 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.949 Test: blockdev comparev and writev ...[2024-11-21 03:16:26.322662] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:38.949 separate metadata which is not supported yet. 
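Nvme0n1 is skipped by comparev_and_writev (see the ERROR just below) because it was created with separate, non-interleaved metadata, which bdevio does not support yet; that layout is visible in the earlier bdev dump ("md_size": 64, "md_interleave": false). A quick way to check a bdev's layout, and to reproduce the unclaimed-bdev listing blockdev.sh@747 builds, assuming a running target:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'
    $rpc bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'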
00:07:38.949 passed 00:07:38.949 Test: blockdev nvme passthru rw ...passed 00:07:38.949 Test: blockdev nvme passthru vendor specific ...passed 00:07:38.949 Test: blockdev nvme admin passthru ...[2024-11-21 03:16:26.322993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:38.950 [2024-11-21 03:16:26.323024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:38.950 passed 00:07:38.950 Test: blockdev copy ...passed 00:07:38.950 00:07:38.950 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.950 suites 7 7 n/a 0 0 00:07:38.950 tests 161 161 161 0 0 00:07:38.950 asserts 1025 1025 1025 0 n/a 00:07:38.950 00:07:38.950 Elapsed time = 0.337 seconds 00:07:38.950 0 00:07:38.950 03:16:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74787 00:07:38.950 03:16:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74787 ']' 00:07:38.950 03:16:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74787 00:07:38.950 03:16:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:38.950 03:16:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.950 03:16:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74787 00:07:38.950 03:16:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.950 03:16:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.950 killing process with pid 74787 00:07:38.950 03:16:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74787' 00:07:38.950 03:16:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74787 00:07:38.950 03:16:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74787 00:07:38.950 03:16:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:38.950 00:07:38.950 real 0m1.293s 00:07:38.950 user 0m3.317s 00:07:38.950 sys 0m0.256s 00:07:38.950 03:16:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.950 03:16:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:38.950 ************************************ 00:07:38.950 END TEST bdev_bounds 00:07:38.950 ************************************ 00:07:39.207 03:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:39.207 03:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:39.207 03:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.207 03:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.207 ************************************ 00:07:39.207 START TEST bdev_nbd 00:07:39.207 ************************************ 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:39.207 03:16:26 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74835 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74835 /var/tmp/spdk-nbd.sock 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74835 ']' 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:39.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:39.207 03:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:39.207 [2024-11-21 03:16:26.608908] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
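The nbd_function_test body that follows exports each bdev as a kernel NBD device over /var/tmp/spdk-nbd.sock and proves each one is usable with a single O_DIRECT read (the waitfornbd/dd pattern in the log below). Condensed to its essentials for one bdev, with the polling loop and temp path as illustrative stand-ins for the test's helpers:

    sock=/var/tmp/spdk-nbd.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0

    # Wait for the kernel to register the device, then do one direct-I/O read.
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

    # Tear down when done.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0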
00:07:39.207 [2024-11-21 03:16:26.609022] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.207 [2024-11-21 03:16:26.740676] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:39.207 [2024-11-21 03:16:26.761995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.465 [2024-11-21 03:16:26.779715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.032 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.032 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:40.032 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:40.032 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.032 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:40.032 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:40.032 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:40.032 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.032 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:40.032 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:40.032 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:40.032 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:40.032 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:40.032 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:40.032 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.291 1+0 records in 00:07:40.291 1+0 records out 00:07:40.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251955 s, 16.3 MB/s 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:40.291 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.551 1+0 records in 00:07:40.551 1+0 records out 00:07:40.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347367 s, 11.8 MB/s 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:40.551 03:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 00:07:40.809 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:40.809 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:40.809 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:40.809 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.810 1+0 records in 00:07:40.810 1+0 records out 00:07:40.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351985 s, 11.6 MB/s 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.810 1+0 records in 00:07:40.810 1+0 records out 00:07:40.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358915 s, 11.4 MB/s 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:40.810 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:41.068 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:41.068 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:41.068 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:41.068 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:41.068 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:41.068 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:41.068 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:41.068 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:41.068 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:41.068 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:41.068 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:41.068 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.068 1+0 records in 00:07:41.068 1+0 records out 00:07:41.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368034 s, 11.1 MB/s 00:07:41.068 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.068 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:41.068 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.068 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:41.068 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:41.068 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:41.069 03:16:28 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:41.069 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.327 1+0 records in 00:07:41.327 1+0 records out 00:07:41.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580536 s, 7.1 MB/s 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:41.327 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.586 1+0 records in 00:07:41.586 1+0 records out 00:07:41.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455981 s, 9.0 MB/s 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:41.586 03:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:41.851 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:41.851 { 00:07:41.851 "nbd_device": "/dev/nbd0", 00:07:41.851 "bdev_name": "Nvme0n1" 00:07:41.851 }, 00:07:41.851 { 00:07:41.851 "nbd_device": "/dev/nbd1", 00:07:41.851 "bdev_name": "Nvme1n1p1" 00:07:41.851 }, 00:07:41.851 { 00:07:41.851 "nbd_device": "/dev/nbd2", 00:07:41.851 "bdev_name": "Nvme1n1p2" 00:07:41.851 }, 00:07:41.851 { 00:07:41.851 "nbd_device": "/dev/nbd3", 00:07:41.851 "bdev_name": "Nvme2n1" 00:07:41.851 }, 00:07:41.851 { 00:07:41.851 "nbd_device": "/dev/nbd4", 00:07:41.851 "bdev_name": "Nvme2n2" 00:07:41.851 }, 00:07:41.852 { 00:07:41.852 "nbd_device": "/dev/nbd5", 00:07:41.852 "bdev_name": "Nvme2n3" 00:07:41.852 }, 00:07:41.852 { 00:07:41.852 "nbd_device": "/dev/nbd6", 00:07:41.852 "bdev_name": "Nvme3n1" 00:07:41.852 } 00:07:41.852 ]' 00:07:41.852 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:41.852 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:41.852 { 00:07:41.852 "nbd_device": "/dev/nbd0", 00:07:41.852 "bdev_name": "Nvme0n1" 00:07:41.852 }, 00:07:41.852 { 00:07:41.852 "nbd_device": "/dev/nbd1", 00:07:41.852 "bdev_name": "Nvme1n1p1" 00:07:41.852 }, 00:07:41.852 { 00:07:41.852 "nbd_device": "/dev/nbd2", 00:07:41.852 "bdev_name": "Nvme1n1p2" 00:07:41.852 }, 00:07:41.852 { 00:07:41.852 "nbd_device": "/dev/nbd3", 00:07:41.852 "bdev_name": "Nvme2n1" 00:07:41.852 }, 00:07:41.852 { 00:07:41.852 "nbd_device": "/dev/nbd4", 00:07:41.852 "bdev_name": "Nvme2n2" 00:07:41.852 }, 00:07:41.852 { 00:07:41.852 "nbd_device": "/dev/nbd5", 00:07:41.852 "bdev_name": "Nvme2n3" 00:07:41.852 }, 00:07:41.852 { 00:07:41.852 "nbd_device": "/dev/nbd6", 00:07:41.852 "bdev_name": "Nvme3n1" 00:07:41.852 } 00:07:41.852 ]' 00:07:41.852 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:41.852 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 
/dev/nbd6' 00:07:41.852 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.852 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:41.852 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:41.852 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:41.852 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.852 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.115 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:42.373 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:42.373 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:42.373 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:42.373 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.373 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.373 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:42.373 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.373 03:16:29 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.373 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.373 03:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:42.631 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:42.631 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:42.631 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:42.631 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.631 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.631 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:42.631 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.631 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.631 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.631 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:42.890 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:42.890 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:42.890 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:42.890 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.890 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.890 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:42.890 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.890 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.890 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.890 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:43.148 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:43.148 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:43.148 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:43.148 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.148 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.148 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:43.148 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.148 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.148 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.148 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd6 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:43.406 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # 
bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:43.665 03:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:43.665 /dev/nbd0 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.665 1+0 records in 00:07:43.665 1+0 records out 00:07:43.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258667 s, 15.8 MB/s 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:43.665 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:43.923 /dev/nbd1 00:07:43.923 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:43.923 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:43.924 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:43.924 03:16:31 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:43.924 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:43.924 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:43.924 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:43.924 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:43.924 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:43.924 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:43.924 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.924 1+0 records in 00:07:43.924 1+0 records out 00:07:43.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355778 s, 11.5 MB/s 00:07:43.924 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.924 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:43.924 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.924 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:43.924 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:43.924 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.924 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:43.924 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:44.185 /dev/nbd10 00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.185 1+0 records in 00:07:44.185 1+0 records out 00:07:44.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357737 s, 11.4 MB/s 00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 
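The waitfornbd probes traced above all follow one pattern: poll /proc/partitions until the kernel registers the device, then prove it actually answers I/O with a single 4 KiB O_DIRECT read whose output size is checked. A condensed sketch of that pattern, built only from commands visible in this trace; the scratch path and the sleep between polls are added assumptions:

    # Sketch of the readiness probe from common/autotest_common.sh (reconstructed).
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            # Device shows up in /proc/partitions once the kernel has sized it.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumption: the harness may pace its polls differently
        done
        for ((i = 1; i <= 20; i++)); do
            # A single 4 KiB direct read proves the NBD connection serves I/O.
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }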
00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:44.185 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:44.461 /dev/nbd11 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.461 1+0 records in 00:07:44.461 1+0 records out 00:07:44.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352921 s, 11.6 MB/s 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:44.461 03:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:44.776 /dev/nbd12 00:07:44.776 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:44.776 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:44.776 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:44.776 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.776 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.776 03:16:32 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.776 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:44.776 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.776 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.776 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.776 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.776 1+0 records in 00:07:44.776 1+0 records out 00:07:44.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299065 s, 13.7 MB/s 00:07:44.776 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.776 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.776 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.776 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.776 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.776 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.776 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:44.776 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:44.776 /dev/nbd13 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.035 1+0 records in 00:07:45.035 1+0 records out 00:07:45.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467048 s, 8.8 MB/s 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:45.035 /dev/nbd14 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.035 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.036 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:45.036 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:45.036 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.036 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.036 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.036 1+0 records in 00:07:45.036 1+0 records out 00:07:45.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429794 s, 9.5 MB/s 00:07:45.036 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.036 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:45.036 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.036 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:45.036 03:16:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:45.036 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.036 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:45.036 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:45.036 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.036 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:45.296 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:45.296 { 00:07:45.296 "nbd_device": "/dev/nbd0", 00:07:45.296 "bdev_name": "Nvme0n1" 00:07:45.296 }, 00:07:45.296 { 00:07:45.296 "nbd_device": "/dev/nbd1", 00:07:45.296 "bdev_name": "Nvme1n1p1" 00:07:45.296 }, 00:07:45.296 { 00:07:45.296 "nbd_device": "/dev/nbd10", 00:07:45.296 "bdev_name": "Nvme1n1p2" 00:07:45.296 }, 00:07:45.296 { 00:07:45.296 "nbd_device": "/dev/nbd11", 00:07:45.296 "bdev_name": "Nvme2n1" 00:07:45.297 
}, 00:07:45.297 { 00:07:45.297 "nbd_device": "/dev/nbd12", 00:07:45.297 "bdev_name": "Nvme2n2" 00:07:45.297 }, 00:07:45.297 { 00:07:45.297 "nbd_device": "/dev/nbd13", 00:07:45.297 "bdev_name": "Nvme2n3" 00:07:45.297 }, 00:07:45.297 { 00:07:45.297 "nbd_device": "/dev/nbd14", 00:07:45.297 "bdev_name": "Nvme3n1" 00:07:45.297 } 00:07:45.297 ]' 00:07:45.297 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:45.297 { 00:07:45.297 "nbd_device": "/dev/nbd0", 00:07:45.297 "bdev_name": "Nvme0n1" 00:07:45.297 }, 00:07:45.297 { 00:07:45.297 "nbd_device": "/dev/nbd1", 00:07:45.297 "bdev_name": "Nvme1n1p1" 00:07:45.297 }, 00:07:45.297 { 00:07:45.297 "nbd_device": "/dev/nbd10", 00:07:45.297 "bdev_name": "Nvme1n1p2" 00:07:45.297 }, 00:07:45.297 { 00:07:45.297 "nbd_device": "/dev/nbd11", 00:07:45.297 "bdev_name": "Nvme2n1" 00:07:45.297 }, 00:07:45.297 { 00:07:45.297 "nbd_device": "/dev/nbd12", 00:07:45.297 "bdev_name": "Nvme2n2" 00:07:45.297 }, 00:07:45.297 { 00:07:45.297 "nbd_device": "/dev/nbd13", 00:07:45.297 "bdev_name": "Nvme2n3" 00:07:45.297 }, 00:07:45.297 { 00:07:45.297 "nbd_device": "/dev/nbd14", 00:07:45.297 "bdev_name": "Nvme3n1" 00:07:45.297 } 00:07:45.297 ]' 00:07:45.297 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:45.297 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:45.297 /dev/nbd1 00:07:45.297 /dev/nbd10 00:07:45.297 /dev/nbd11 00:07:45.297 /dev/nbd12 00:07:45.297 /dev/nbd13 00:07:45.297 /dev/nbd14' 00:07:45.297 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:45.297 /dev/nbd1 00:07:45.297 /dev/nbd10 00:07:45.297 /dev/nbd11 00:07:45.297 /dev/nbd12 00:07:45.297 /dev/nbd13 00:07:45.297 /dev/nbd14' 00:07:45.297 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:45.297 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:45.297 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:45.297 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:45.297 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:45.297 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:45.297 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:45.297 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.297 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:45.297 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:45.297 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:45.297 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:45.559 256+0 records in 00:07:45.559 256+0 records out 00:07:45.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01236 s, 84.8 MB/s 00:07:45.559 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.559 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:45.559 256+0 records in 00:07:45.559 256+0 records out 00:07:45.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0592617 s, 17.7 MB/s 00:07:45.559 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.559 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:45.559 256+0 records in 00:07:45.559 256+0 records out 00:07:45.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0602812 s, 17.4 MB/s 00:07:45.559 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.559 03:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:45.559 256+0 records in 00:07:45.559 256+0 records out 00:07:45.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0621401 s, 16.9 MB/s 00:07:45.559 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.559 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:45.559 256+0 records in 00:07:45.559 256+0 records out 00:07:45.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0549985 s, 19.1 MB/s 00:07:45.559 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.559 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:45.820 256+0 records in 00:07:45.820 256+0 records out 00:07:45.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0547166 s, 19.2 MB/s 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:45.820 256+0 records in 00:07:45.820 256+0 records out 00:07:45.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0581075 s, 18.0 MB/s 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:45.820 256+0 records in 00:07:45.820 256+0 records out 00:07:45.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0562018 s, 18.7 MB/s 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # 
'[' verify = write ']' 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.820 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:46.081 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:46.081 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:46.081 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:46.081 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.081 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.081 03:16:33 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:46.081 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.081 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.081 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.081 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:46.343 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:46.343 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:46.343 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:46.343 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.343 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.343 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:46.343 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.343 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.343 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.343 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:46.605 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:46.605 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:46.605 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:46.605 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.605 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.605 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:46.605 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.605 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.605 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.605 03:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:46.605 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:46.605 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:46.605 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:46.605 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.605 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.605 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:46.866 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.866 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.866 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.866 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:46.866 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:46.866 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:46.866 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:46.866 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.866 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.866 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:46.866 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.866 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.866 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.866 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:47.127 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:47.127 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:47.127 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:47.127 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.127 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.127 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:47.127 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.127 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.127 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.128 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:47.388 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:47.388 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:47.388 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:47.388 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.388 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.388 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:47.388 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.388 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.388 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:47.388 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.388 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:47.388 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:47.388 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 
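The jq/grep sequence running here is the harness's standard count check: dump the live exports as JSON over the control socket, project out each nbd_device path, and count the matches, tolerating the nonzero exit status grep -c returns when nothing is left. A minimal sketch under the same paths as the trace:

    # Count active NBD exports registered with the SPDK app (paths from the trace).
    nbd_get_count() {
        local rpc_server=$1 disks_json count
        disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        # grep -c exits 1 on zero matches, hence the trailing 'true' in the log.
        count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
        echo "$count"
    }

    nbd_get_count /var/tmp/spdk-nbd.sock    # prints 0 once every disk is stopped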
00:07:47.388 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:47.648 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:47.648 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:47.648 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:47.648 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:47.648 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:47.648 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:47.648 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:47.648 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:47.648 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:47.648 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:47.648 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.648 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:47.648 03:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:47.648 malloc_lvol_verify 00:07:47.648 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:47.908 8af10b40-f8d5-465b-b8e6-9952d08f967a 00:07:47.908 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:48.167 0567a7d3-3f94-4c65-b256-778af238a516 00:07:48.167 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:48.428 /dev/nbd0 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:48.428 mke2fs 1.47.0 (5-Feb-2023) 00:07:48.428 Discarding device blocks: 0/4096 done 00:07:48.428 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:48.428 00:07:48.428 Allocating group tables: 0/1 done 00:07:48.428 Writing inode tables: 0/1 done 00:07:48.428 Creating journal (1024 blocks): done 00:07:48.428 Writing superblocks and filesystem accounting information: 0/1 done 00:07:48.428 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 
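The lvol round trip just completed is worth restating in one block: a 16 MB malloc bdev is created, a logical volume store is layered on it, a 4 MB volume is carved out, exported over NBD, held until the kernel publishes a nonzero capacity, and finally proven writable by formatting it. A sketch under the same RPC paths; this compresses the traced helpers rather than quoting them:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB bdev, 512 B blocks
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on the malloc bdev
    $RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MB logical volume
    $RPC nbd_start_disk lvs/lvol /dev/nbd0                 # hand it to the kernel
    # wait_for_nbd_set_capacity: usable once the kernel reports a size;
    # 8192 sectors of 512 B == the 4 MB volume seen in the trace.
    [[ -e /sys/block/nbd0/size ]] && (( $(cat /sys/block/nbd0/size) != 0 ))
    mkfs.ext4 /dev/nbd0                                    # end-to-end write proof
    $RPC nbd_stop_disk /dev/nbd0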
00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74835 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74835 ']' 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74835 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74835 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.428 killing process with pid 74835 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74835' 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74835 00:07:48.428 03:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74835 00:07:48.689 03:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:48.689 00:07:48.689 real 0m9.597s 00:07:48.689 user 0m14.087s 00:07:48.689 sys 0m3.307s 00:07:48.689 03:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.689 ************************************ 00:07:48.689 03:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:48.689 END TEST bdev_nbd 00:07:48.689 ************************************ 00:07:48.689 03:16:36 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:48.689 03:16:36 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:48.689 03:16:36 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:48.689 skipping fio tests on NVMe due to multi-ns failures. 00:07:48.689 03:16:36 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
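killprocess above is a guarded teardown rather than a bare kill: it checks the pid is non-empty and still alive, looks up the command name so it never signals a sudo wrapper directly, and waits so the daemon's exit status (and its RPC socket) are actually reaped before the next test starts. A simplified sketch of the branch the trace took; the sudo path, which the log only shows being skipped, is omitted:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1        # no pid recorded, nothing to do
        kill -0 "$pid" || return 1       # signal 0: existence check only
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1   # the real harness handles this case
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                      # works because the app is our child process
    }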
00:07:48.689 03:16:36 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:48.689 03:16:36 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:48.689 03:16:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:48.689 03:16:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.689 03:16:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:48.689 ************************************ 00:07:48.689 START TEST bdev_verify 00:07:48.689 ************************************ 00:07:48.689 03:16:36 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:48.949 [2024-11-21 03:16:36.262651] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:48.949 [2024-11-21 03:16:36.262765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75232 ] 00:07:48.949 [2024-11-21 03:16:36.394229] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:48.949 [2024-11-21 03:16:36.418262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:48.949 [2024-11-21 03:16:36.436183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.949 [2024-11-21 03:16:36.436239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.520 Running I/O for 5 seconds... 
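bdev_verify drives every bdev described in bdev.json through bdevperf's verify workload, which writes a known pattern and reads it back for comparison. The invocation above, annotated (the paired Core Mask 0x1 and 0x2 job lines per bdev in the results below suggest that -C lets every core in the -m 0x3 mask submit I/O to each bdev):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  # -q 128    : 128 outstanding I/Os per job
  # -o 4096   : 4 KiB I/O size
  # -w verify : write, then read back and compare
  # -t 5      : run for 5 seconds
  # -m 0x3    : run reactors on cores 0 and 1
  $bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3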
00:07:51.845 23872.00 IOPS, 93.25 MiB/s [2024-11-21T03:16:40.351Z] 23040.00 IOPS, 90.00 MiB/s [2024-11-21T03:16:41.295Z] 23104.00 IOPS, 90.25 MiB/s [2024-11-21T03:16:42.231Z] 22368.00 IOPS, 87.38 MiB/s [2024-11-21T03:16:42.231Z] 22080.00 IOPS, 86.25 MiB/s 00:07:54.666 Latency(us) 00:07:54.666 [2024-11-21T03:16:42.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.666 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:54.666 Verification LBA range: start 0x0 length 0xbd0bd 00:07:54.666 Nvme0n1 : 5.06 1542.96 6.03 0.00 0.00 82563.56 16131.94 93161.94 00:07:54.666 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:54.666 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:54.666 Nvme0n1 : 5.04 1550.67 6.06 0.00 0.00 82240.52 13308.85 91952.05 00:07:54.666 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:54.666 Verification LBA range: start 0x0 length 0x4ff80 00:07:54.666 Nvme1n1p1 : 5.06 1542.48 6.03 0.00 0.00 82318.53 18955.03 80256.39 00:07:54.666 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:54.666 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:54.666 Nvme1n1p1 : 5.07 1553.35 6.07 0.00 0.00 81878.25 6805.66 82676.18 00:07:54.666 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:54.666 Verification LBA range: start 0x0 length 0x4ff7f 00:07:54.666 Nvme1n1p2 : 5.08 1548.64 6.05 0.00 0.00 81936.08 7158.55 72997.02 00:07:54.666 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:54.666 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:54.666 Nvme1n1p2 : 5.07 1552.88 6.07 0.00 0.00 81730.49 6427.57 76626.71 00:07:54.666 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:54.666 Verification LBA range: start 0x0 length 0x80000 00:07:54.666 Nvme2n1 : 5.08 1548.24 6.05 0.00 0.00 81816.19 7713.08 73400.32 00:07:54.666 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:54.666 Verification LBA range: start 0x80000 length 0x80000 00:07:54.666 Nvme2n1 : 5.09 1560.31 6.09 0.00 0.00 81363.67 14216.27 72190.42 00:07:54.666 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:54.666 Verification LBA range: start 0x0 length 0x80000 00:07:54.666 Nvme2n2 : 5.09 1547.78 6.05 0.00 0.00 81678.54 8065.97 74610.22 00:07:54.666 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:54.666 Verification LBA range: start 0x80000 length 0x80000 00:07:54.666 Nvme2n2 : 5.09 1559.29 6.09 0.00 0.00 81207.71 15627.82 73803.62 00:07:54.666 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:54.666 Verification LBA range: start 0x0 length 0x80000 00:07:54.666 Nvme2n3 : 5.10 1556.89 6.08 0.00 0.00 81165.63 7158.55 77030.01 00:07:54.666 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:54.666 Verification LBA range: start 0x80000 length 0x80000 00:07:54.666 Nvme2n3 : 5.09 1558.85 6.09 0.00 0.00 81051.58 15930.29 75416.81 00:07:54.666 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:54.666 Verification LBA range: start 0x0 length 0x20000 00:07:54.666 Nvme3n1 : 5.10 1556.48 6.08 0.00 0.00 81008.94 7410.61 77030.01 00:07:54.667 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:54.667 Verification LBA range: start 0x20000 length 0x20000 00:07:54.667 Nvme3n1 : 
5.09 1558.44 6.09 0.00 0.00 80893.14 13107.20 77433.30 00:07:54.667 [2024-11-21T03:16:42.232Z] =================================================================================================================== 00:07:54.667 [2024-11-21T03:16:42.232Z] Total : 21737.26 84.91 0.00 0.00 81629.26 6427.57 93161.94 00:07:55.604 00:07:55.604 real 0m6.959s 00:07:55.604 user 0m12.392s 00:07:55.604 sys 0m0.237s 00:07:55.604 03:16:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.604 03:16:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:55.604 ************************************ 00:07:55.604 END TEST bdev_verify 00:07:55.604 ************************************ 00:07:55.862 03:16:43 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:55.862 03:16:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:55.862 03:16:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.862 03:16:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:55.862 ************************************ 00:07:55.862 START TEST bdev_verify_big_io 00:07:55.862 ************************************ 00:07:55.862 03:16:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:55.862 [2024-11-21 03:16:43.306006] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:55.862 [2024-11-21 03:16:43.306152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75325 ] 00:07:56.121 [2024-11-21 03:16:43.442857] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:56.121 [2024-11-21 03:16:43.471921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:56.121 [2024-11-21 03:16:43.503422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.121 [2024-11-21 03:16:43.503476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.691 Running I/O for 5 seconds... 
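The IOPS and MiB/s columns in these tables are two views of the same measurement: MiB/s = IOPS × I/O size / 2^20. Checking that against the tool's own numbers: the first verify sample above reports 23872.00 IOPS at 4 KiB, and 23872 × 4096 / 1048576 = 93.25 MiB/s, exactly the printed value; the first big-I/O sample below reports 1093.00 IOPS at 64 KiB, and 1093 × 65536 / 1048576 = 68.31 MiB/s, again matching.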
00:08:01.904 1093.00 IOPS, 68.31 MiB/s [2024-11-21T03:16:50.041Z] 2565.50 IOPS, 160.34 MiB/s [2024-11-21T03:16:50.302Z] 3081.33 IOPS, 192.58 MiB/s 00:08:02.737 Latency(us) 00:08:02.737 [2024-11-21T03:16:50.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.737 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:02.737 Verification LBA range: start 0x0 length 0xbd0b 00:08:02.737 Nvme0n1 : 5.88 119.73 7.48 0.00 0.00 1018752.68 22181.42 1109877.37 00:08:02.737 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:02.737 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:02.737 Nvme0n1 : 5.92 132.16 8.26 0.00 0.00 831709.99 35086.97 1271196.75 00:08:02.737 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:02.737 Verification LBA range: start 0x0 length 0x4ff8 00:08:02.737 Nvme1n1p1 : 5.94 123.06 7.69 0.00 0.00 978647.58 62914.56 987274.63 00:08:02.737 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:02.737 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:02.737 Nvme1n1p1 : 6.01 138.37 8.65 0.00 0.00 772824.74 8318.03 1690627.15 00:08:02.737 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:02.737 Verification LBA range: start 0x0 length 0x4ff7 00:08:02.737 Nvme1n1p2 : 5.94 111.59 6.97 0.00 0.00 1051445.02 60898.07 1780966.01 00:08:02.737 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:02.737 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:02.737 Nvme1n1p2 : 6.08 193.61 12.10 0.00 0.00 540048.18 267.82 1374441.16 00:08:02.737 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:02.737 Verification LBA range: start 0x0 length 0x8000 00:08:02.737 Nvme2n1 : 5.89 123.83 7.74 0.00 0.00 925035.00 62511.26 1025991.29 00:08:02.737 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:02.737 Verification LBA range: start 0x8000 length 0x8000 00:08:02.737 Nvme2n1 : 5.82 113.98 7.12 0.00 0.00 1068322.42 34885.32 1639004.95 00:08:02.737 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:02.737 Verification LBA range: start 0x0 length 0x8000 00:08:02.737 Nvme2n2 : 5.95 129.18 8.07 0.00 0.00 866172.72 55251.89 929199.66 00:08:02.737 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:02.737 Verification LBA range: start 0x8000 length 0x8000 00:08:02.737 Nvme2n2 : 5.82 123.56 7.72 0.00 0.00 971864.87 53235.40 1187310.67 00:08:02.737 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:02.737 Verification LBA range: start 0x0 length 0x8000 00:08:02.737 Nvme2n3 : 6.00 132.04 8.25 0.00 0.00 821420.43 50412.31 948557.98 00:08:02.737 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:02.737 Verification LBA range: start 0x8000 length 0x8000 00:08:02.737 Nvme2n3 : 5.88 121.99 7.62 0.00 0.00 943307.79 89532.26 1213121.77 00:08:02.737 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:02.737 Verification LBA range: start 0x0 length 0x2000 00:08:02.737 Nvme3n1 : 6.01 145.42 9.09 0.00 0.00 729803.40 1915.67 974369.08 00:08:02.737 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:02.737 Verification LBA range: start 0x2000 length 0x2000 00:08:02.737 Nvme3n1 : 5.88 122.12 7.63 0.00 0.00 926750.09 57268.38 1755154.90 00:08:02.737 
[2024-11-21T03:16:50.302Z] =================================================================================================================== 00:08:02.737 [2024-11-21T03:16:50.302Z] Total : 1830.63 114.41 0.00 0.00 868114.51 267.82 1780966.01 00:08:03.678 00:08:03.678 real 0m7.786s 00:08:03.678 user 0m14.240s 00:08:03.678 sys 0m0.294s 00:08:03.678 03:16:51 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.678 ************************************ 00:08:03.678 END TEST bdev_verify_big_io 00:08:03.678 ************************************ 00:08:03.678 03:16:51 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:03.678 03:16:51 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:03.678 03:16:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:03.678 03:16:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.678 03:16:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.678 ************************************ 00:08:03.678 START TEST bdev_write_zeroes 00:08:03.678 ************************************ 00:08:03.678 03:16:51 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:03.678 [2024-11-21 03:16:51.136318] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:08:03.678 [2024-11-21 03:16:51.136423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75423 ] 00:08:03.939 [2024-11-21 03:16:51.268634] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:03.939 [2024-11-21 03:16:51.299173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.939 [2024-11-21 03:16:51.318571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.199 Running I/O for 1 seconds... 
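Each of these suites is launched through the run_test helper from autotest_common.sh, which produces the START TEST / END TEST banners and the real/user/sys accounting seen throughout this log. A simplified sketch of the wrapper's shape, offered as an assumption about its structure rather than the exact SPDK implementation:

  run_test() {
      local test_name=$1; shift
      echo "START TEST $test_name"
      time "$@"            # the timed body; this is where the real/user/sys lines come from
      echo "END TEST $test_name"
  }
  # e.g. run_test bdev_write_zeroes "$bdevperf" --json "$cfg" -q 128 -o 4096 -w write_zeroes -t 1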
00:08:05.585 54275.00 IOPS, 212.01 MiB/s 00:08:05.585 Latency(us) 00:08:05.585 [2024-11-21T03:16:53.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.585 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:05.585 Nvme0n1 : 1.03 7540.82 29.46 0.00 0.00 16934.88 7461.02 124215.93 00:08:05.585 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:05.585 Nvme1n1p1 : 1.03 7777.43 30.38 0.00 0.00 16396.14 8116.38 73400.32 00:08:05.585 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:05.585 Nvme1n1p2 : 1.03 7767.96 30.34 0.00 0.00 16381.58 8670.92 73400.32 00:08:05.585 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:05.585 Nvme2n1 : 1.03 7759.21 30.31 0.00 0.00 16375.62 8922.98 73400.32 00:08:05.585 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:05.585 Nvme2n2 : 1.03 7750.44 30.28 0.00 0.00 16357.27 8116.38 82676.18 00:08:05.585 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:05.585 Nvme2n3 : 1.03 7679.76 30.00 0.00 0.00 16485.25 7965.14 81466.29 00:08:05.585 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:05.585 Nvme3n1 : 1.03 7671.14 29.97 0.00 0.00 16480.17 6553.60 77836.60 00:08:05.585 [2024-11-21T03:16:53.150Z] =================================================================================================================== 00:08:05.585 [2024-11-21T03:16:53.150Z] Total : 53946.74 210.73 0.00 0.00 16485.25 6553.60 124215.93 00:08:05.585 00:08:05.585 real 0m1.854s 00:08:05.585 user 0m1.568s 00:08:05.585 sys 0m0.174s 00:08:05.585 03:16:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.585 ************************************ 00:08:05.585 END TEST bdev_write_zeroes 00:08:05.586 ************************************ 00:08:05.586 03:16:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:05.586 03:16:52 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:05.586 03:16:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:05.586 03:16:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.586 03:16:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:05.586 ************************************ 00:08:05.586 START TEST bdev_json_nonenclosed 00:08:05.586 ************************************ 00:08:05.586 03:16:52 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:05.586 [2024-11-21 03:16:53.057593] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:08:05.586 [2024-11-21 03:16:53.057705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75464 ] 00:08:05.847 [2024-11-21 03:16:53.189224] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:05.847 [2024-11-21 03:16:53.219700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.847 [2024-11-21 03:16:53.245143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.847 [2024-11-21 03:16:53.245226] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:05.847 [2024-11-21 03:16:53.245243] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:05.847 [2024-11-21 03:16:53.245252] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.847 00:08:05.847 real 0m0.322s 00:08:05.847 user 0m0.129s 00:08:05.847 sys 0m0.090s 00:08:05.847 ************************************ 00:08:05.847 END TEST bdev_json_nonenclosed 00:08:05.847 ************************************ 00:08:05.847 03:16:53 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.847 03:16:53 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:05.847 03:16:53 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:05.847 03:16:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:05.847 03:16:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.847 03:16:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:05.847 ************************************ 00:08:05.847 START TEST bdev_json_nonarray 00:08:05.847 ************************************ 00:08:05.847 03:16:53 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:06.109 [2024-11-21 03:16:53.430411] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:08:06.109 [2024-11-21 03:16:53.430524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75485 ] 00:08:06.109 [2024-11-21 03:16:53.562165] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:06.109 [2024-11-21 03:16:53.594311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.109 [2024-11-21 03:16:53.613417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.109 [2024-11-21 03:16:53.613502] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
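bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each feeds bdevperf a deliberately malformed --json file and expects exactly the errors recorded here, a configuration not enclosed in {} and a 'subsystems' key that is not an array. For reference, the outer shape a configuration has to satisfy is the following minimal skeleton (the bdev subsystem entry is illustrative; real configs carry method/params objects inside "config"):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": []
      }
    ]
  }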
00:08:06.109 [2024-11-21 03:16:53.613520] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:06.109 [2024-11-21 03:16:53.613534] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.371 00:08:06.371 real 0m0.312s 00:08:06.371 user 0m0.120s 00:08:06.371 sys 0m0.088s 00:08:06.371 03:16:53 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.371 ************************************ 00:08:06.371 END TEST bdev_json_nonarray 00:08:06.371 ************************************ 00:08:06.371 03:16:53 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:06.371 03:16:53 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:06.371 03:16:53 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:06.371 03:16:53 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:06.371 03:16:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.371 03:16:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.371 03:16:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:06.371 ************************************ 00:08:06.371 START TEST bdev_gpt_uuid 00:08:06.371 ************************************ 00:08:06.371 03:16:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:06.371 03:16:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:06.371 03:16:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:06.371 03:16:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75505 00:08:06.371 03:16:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:06.371 03:16:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 75505 00:08:06.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.371 03:16:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 75505 ']' 00:08:06.371 03:16:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.371 03:16:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.371 03:16:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.371 03:16:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.371 03:16:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:06.371 03:16:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:06.371 [2024-11-21 03:16:53.820206] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:08:06.371 [2024-11-21 03:16:53.820341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75505 ] 00:08:06.631 [2024-11-21 03:16:53.953213] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:06.631 [2024-11-21 03:16:53.980237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.631 [2024-11-21 03:16:54.000469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.202 03:16:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.202 03:16:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:07.202 03:16:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:07.202 03:16:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.202 03:16:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:07.463 Some configs were skipped because the RPC state that can call them passed over. 00:08:07.463 03:16:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.463 03:16:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:07.463 03:16:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.463 03:16:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:07.463 03:16:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.463 03:16:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:07.463 03:16:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.463 03:16:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:07.463 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.463 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:07.463 { 00:08:07.463 "name": "Nvme1n1p1", 00:08:07.463 "aliases": [ 00:08:07.463 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:07.463 ], 00:08:07.463 "product_name": "GPT Disk", 00:08:07.463 "block_size": 4096, 00:08:07.463 "num_blocks": 655104, 00:08:07.463 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:07.463 "assigned_rate_limits": { 00:08:07.463 "rw_ios_per_sec": 0, 00:08:07.463 "rw_mbytes_per_sec": 0, 00:08:07.463 "r_mbytes_per_sec": 0, 00:08:07.463 "w_mbytes_per_sec": 0 00:08:07.463 }, 00:08:07.463 "claimed": false, 00:08:07.463 "zoned": false, 00:08:07.463 "supported_io_types": { 00:08:07.463 "read": true, 00:08:07.463 "write": true, 00:08:07.463 "unmap": true, 00:08:07.463 "flush": true, 00:08:07.463 "reset": true, 00:08:07.463 "nvme_admin": false, 00:08:07.463 "nvme_io": false, 00:08:07.463 "nvme_io_md": false, 00:08:07.463 "write_zeroes": true, 00:08:07.463 "zcopy": false, 00:08:07.463 "get_zone_info": false, 00:08:07.463 "zone_management": false, 00:08:07.463 "zone_append": false, 00:08:07.463 "compare": true, 00:08:07.463 "compare_and_write": false, 00:08:07.463 "abort": true, 00:08:07.463 "seek_hole": false, 00:08:07.463 "seek_data": false, 00:08:07.463 "copy": true, 00:08:07.463 "nvme_iov_md": false 00:08:07.463 }, 00:08:07.463 "driver_specific": { 00:08:07.464 "gpt": { 00:08:07.464 "base_bdev": "Nvme1n1", 00:08:07.464 "offset_blocks": 256, 00:08:07.464 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:07.464 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 
00:08:07.464 "partition_name": "SPDK_TEST_first" 00:08:07.464 } 00:08:07.464 } 00:08:07.464 } 00:08:07.464 ]' 00:08:07.464 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:07.726 { 00:08:07.726 "name": "Nvme1n1p2", 00:08:07.726 "aliases": [ 00:08:07.726 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:07.726 ], 00:08:07.726 "product_name": "GPT Disk", 00:08:07.726 "block_size": 4096, 00:08:07.726 "num_blocks": 655103, 00:08:07.726 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:07.726 "assigned_rate_limits": { 00:08:07.726 "rw_ios_per_sec": 0, 00:08:07.726 "rw_mbytes_per_sec": 0, 00:08:07.726 "r_mbytes_per_sec": 0, 00:08:07.726 "w_mbytes_per_sec": 0 00:08:07.726 }, 00:08:07.726 "claimed": false, 00:08:07.726 "zoned": false, 00:08:07.726 "supported_io_types": { 00:08:07.726 "read": true, 00:08:07.726 "write": true, 00:08:07.726 "unmap": true, 00:08:07.726 "flush": true, 00:08:07.726 "reset": true, 00:08:07.726 "nvme_admin": false, 00:08:07.726 "nvme_io": false, 00:08:07.726 "nvme_io_md": false, 00:08:07.726 "write_zeroes": true, 00:08:07.726 "zcopy": false, 00:08:07.726 "get_zone_info": false, 00:08:07.726 "zone_management": false, 00:08:07.726 "zone_append": false, 00:08:07.726 "compare": true, 00:08:07.726 "compare_and_write": false, 00:08:07.726 "abort": true, 00:08:07.726 "seek_hole": false, 00:08:07.726 "seek_data": false, 00:08:07.726 "copy": true, 00:08:07.726 "nvme_iov_md": false 00:08:07.726 }, 00:08:07.726 "driver_specific": { 00:08:07.726 "gpt": { 00:08:07.726 "base_bdev": "Nvme1n1", 00:08:07.726 "offset_blocks": 655360, 00:08:07.726 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:07.726 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:07.726 "partition_name": "SPDK_TEST_second" 00:08:07.726 } 00:08:07.726 } 00:08:07.726 } 00:08:07.726 ]' 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 
abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 75505 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 75505 ']' 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 75505 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75505 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.726 killing process with pid 75505 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75505' 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 75505 00:08:07.726 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 75505 00:08:08.299 00:08:08.299 real 0m1.814s 00:08:08.299 user 0m1.985s 00:08:08.299 sys 0m0.336s 00:08:08.299 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.299 ************************************ 00:08:08.299 END TEST bdev_gpt_uuid 00:08:08.299 ************************************ 00:08:08.299 03:16:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:08.299 03:16:55 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:08.299 03:16:55 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:08.299 03:16:55 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:08.299 03:16:55 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:08.299 03:16:55 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:08.299 03:16:55 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:08.299 03:16:55 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:08.299 03:16:55 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:08.299 03:16:55 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:08.561 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:08.561 Waiting for block devices as requested 00:08:08.561 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:08.823 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:08.823 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:09.084 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:14.382 * Events for some block/disk devices (0000:00:13.0) were not caught, 
they may be missing 00:08:14.382 03:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:14.382 03:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:14.382 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:14.382 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:14.382 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:14.382 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:14.382 03:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:14.382 00:08:14.382 real 0m47.931s 00:08:14.382 user 0m59.817s 00:08:14.382 sys 0m7.405s 00:08:14.382 03:17:01 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.382 ************************************ 00:08:14.382 END TEST blockdev_nvme_gpt 00:08:14.382 ************************************ 00:08:14.382 03:17:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:14.382 03:17:01 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:14.382 03:17:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.382 03:17:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.382 03:17:01 -- common/autotest_common.sh@10 -- # set +x 00:08:14.382 ************************************ 00:08:14.382 START TEST nvme 00:08:14.382 ************************************ 00:08:14.382 03:17:01 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:14.382 * Looking for test storage... 00:08:14.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:14.644 03:17:01 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:14.644 03:17:01 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:08:14.644 03:17:01 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:14.644 03:17:02 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:14.644 03:17:02 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.644 03:17:02 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.644 03:17:02 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.644 03:17:02 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.644 03:17:02 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.644 03:17:02 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.644 03:17:02 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.644 03:17:02 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.644 03:17:02 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.644 03:17:02 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.644 03:17:02 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.644 03:17:02 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:14.644 03:17:02 nvme -- scripts/common.sh@345 -- # : 1 00:08:14.644 03:17:02 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.644 03:17:02 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.644 03:17:02 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:14.644 03:17:02 nvme -- scripts/common.sh@353 -- # local d=1 00:08:14.644 03:17:02 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.644 03:17:02 nvme -- scripts/common.sh@355 -- # echo 1 00:08:14.644 03:17:02 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.644 03:17:02 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:14.644 03:17:02 nvme -- scripts/common.sh@353 -- # local d=2 00:08:14.644 03:17:02 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.644 03:17:02 nvme -- scripts/common.sh@355 -- # echo 2 00:08:14.644 03:17:02 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.644 03:17:02 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.644 03:17:02 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.644 03:17:02 nvme -- scripts/common.sh@368 -- # return 0 00:08:14.644 03:17:02 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.644 03:17:02 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:14.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.644 --rc genhtml_branch_coverage=1 00:08:14.644 --rc genhtml_function_coverage=1 00:08:14.644 --rc genhtml_legend=1 00:08:14.644 --rc geninfo_all_blocks=1 00:08:14.644 --rc geninfo_unexecuted_blocks=1 00:08:14.644 00:08:14.644 ' 00:08:14.644 03:17:02 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:14.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.644 --rc genhtml_branch_coverage=1 00:08:14.644 --rc genhtml_function_coverage=1 00:08:14.644 --rc genhtml_legend=1 00:08:14.644 --rc geninfo_all_blocks=1 00:08:14.644 --rc geninfo_unexecuted_blocks=1 00:08:14.644 00:08:14.644 ' 00:08:14.644 03:17:02 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:14.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.644 --rc genhtml_branch_coverage=1 00:08:14.644 --rc genhtml_function_coverage=1 00:08:14.644 --rc genhtml_legend=1 00:08:14.644 --rc geninfo_all_blocks=1 00:08:14.644 --rc geninfo_unexecuted_blocks=1 00:08:14.644 00:08:14.644 ' 00:08:14.644 03:17:02 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:14.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.645 --rc genhtml_branch_coverage=1 00:08:14.645 --rc genhtml_function_coverage=1 00:08:14.645 --rc genhtml_legend=1 00:08:14.645 --rc geninfo_all_blocks=1 00:08:14.645 --rc geninfo_unexecuted_blocks=1 00:08:14.645 00:08:14.645 ' 00:08:14.645 03:17:02 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:15.217 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:15.789 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:15.789 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:15.789 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:15.789 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:15.789 03:17:03 nvme -- nvme/nvme.sh@79 -- # uname 00:08:15.789 03:17:03 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:15.789 03:17:03 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:15.789 03:17:03 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:15.790 03:17:03 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:15.790 03:17:03 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:15.790 03:17:03 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:15.790 Waiting for stub to ready for secondary processes... 00:08:15.790 03:17:03 nvme -- common/autotest_common.sh@1075 -- # stubpid=76136 00:08:15.790 03:17:03 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:15.790 03:17:03 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:15.790 03:17:03 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:15.790 03:17:03 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/76136 ]] 00:08:15.790 03:17:03 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:15.790 [2024-11-21 03:17:03.284986] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:08:15.790 [2024-11-21 03:17:03.285137] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:16.732 03:17:04 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:16.732 03:17:04 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/76136 ]] 00:08:16.732 03:17:04 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:16.993 [2024-11-21 03:17:04.416481] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:16.993 [2024-11-21 03:17:04.448160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:16.993 [2024-11-21 03:17:04.471384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.993 [2024-11-21 03:17:04.471474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.993 [2024-11-21 03:17:04.471636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.993 [2024-11-21 03:17:04.485563] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:16.993 [2024-11-21 03:17:04.485628] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:16.993 [2024-11-21 03:17:04.501777] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:16.993 [2024-11-21 03:17:04.501998] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:16.993 [2024-11-21 03:17:04.502597] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:16.993 [2024-11-21 03:17:04.502871] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:16.993 [2024-11-21 03:17:04.502937] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:16.993 [2024-11-21 03:17:04.504203] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:16.993 [2024-11-21 03:17:04.504391] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:16.993 [2024-11-21 03:17:04.504445] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:16.993 [2024-11-21 03:17:04.506804] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 
00:08:16.993 [2024-11-21 03:17:04.507006] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:16.993 [2024-11-21 03:17:04.507055] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:16.993 [2024-11-21 03:17:04.507104] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:16.993 [2024-11-21 03:17:04.507186] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:17.938 done. 00:08:17.938 03:17:05 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:17.938 03:17:05 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:17.938 03:17:05 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:17.938 03:17:05 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:17.938 03:17:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.938 03:17:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:17.938 ************************************ 00:08:17.938 START TEST nvme_reset 00:08:17.938 ************************************ 00:08:17.938 03:17:05 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:17.938 Initializing NVMe Controllers 00:08:17.938 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:17.938 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:17.938 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:17.938 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:17.938 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:17.938 00:08:17.938 real 0m0.219s 00:08:17.938 user 0m0.070s 00:08:17.938 sys 0m0.108s 00:08:17.938 03:17:05 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.938 03:17:05 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:17.938 ************************************ 00:08:17.938 END TEST nvme_reset 00:08:17.938 ************************************ 00:08:18.199 03:17:05 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:18.200 03:17:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.200 03:17:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.200 03:17:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:18.200 ************************************ 00:08:18.200 START TEST nvme_identify 00:08:18.200 ************************************ 00:08:18.200 03:17:05 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:18.200 03:17:05 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:18.200 03:17:05 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:18.200 03:17:05 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:18.200 03:17:05 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:18.200 03:17:05 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:18.200 03:17:05 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:18.200 03:17:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:18.200 03:17:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:18.200 03:17:05 nvme.nvme_identify -- 
common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:18.200 03:17:05 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:18.200 03:17:05 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:18.200 03:17:05 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:18.463 [2024-11-21 03:17:05.818156] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 76169 terminated unexpected 00:08:18.463 ===================================================== 00:08:18.463 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:18.463 ===================================================== 00:08:18.463 Controller Capabilities/Features 00:08:18.463 ================================ 00:08:18.463 Vendor ID: 1b36 00:08:18.463 Subsystem Vendor ID: 1af4 00:08:18.463 Serial Number: 12343 00:08:18.463 Model Number: QEMU NVMe Ctrl 00:08:18.463 Firmware Version: 8.0.0 00:08:18.463 Recommended Arb Burst: 6 00:08:18.463 IEEE OUI Identifier: 00 54 52 00:08:18.463 Multi-path I/O 00:08:18.463 May have multiple subsystem ports: No 00:08:18.463 May have multiple controllers: Yes 00:08:18.463 Associated with SR-IOV VF: No 00:08:18.463 Max Data Transfer Size: 524288 00:08:18.463 Max Number of Namespaces: 256 00:08:18.463 Max Number of I/O Queues: 64 00:08:18.463 NVMe Specification Version (VS): 1.4 00:08:18.463 NVMe Specification Version (Identify): 1.4 00:08:18.463 Maximum Queue Entries: 2048 00:08:18.463 Contiguous Queues Required: Yes 00:08:18.463 Arbitration Mechanisms Supported 00:08:18.463 Weighted Round Robin: Not Supported 00:08:18.463 Vendor Specific: Not Supported 00:08:18.463 Reset Timeout: 7500 ms 00:08:18.463 Doorbell Stride: 4 bytes 00:08:18.463 NVM Subsystem Reset: Not Supported 00:08:18.463 Command Sets Supported 00:08:18.463 NVM Command Set: Supported 00:08:18.463 Boot Partition: Not Supported 00:08:18.463 Memory Page Size Minimum: 4096 bytes 00:08:18.463 Memory Page Size Maximum: 65536 bytes 00:08:18.463 Persistent Memory Region: Not Supported 00:08:18.463 Optional Asynchronous Events Supported 00:08:18.463 Namespace Attribute Notices: Supported 00:08:18.463 Firmware Activation Notices: Not Supported 00:08:18.463 ANA Change Notices: Not Supported 00:08:18.463 PLE Aggregate Log Change Notices: Not Supported 00:08:18.463 LBA Status Info Alert Notices: Not Supported 00:08:18.463 EGE Aggregate Log Change Notices: Not Supported 00:08:18.463 Normal NVM Subsystem Shutdown event: Not Supported 00:08:18.463 Zone Descriptor Change Notices: Not Supported 00:08:18.463 Discovery Log Change Notices: Not Supported 00:08:18.463 Controller Attributes 00:08:18.463 128-bit Host Identifier: Not Supported 00:08:18.463 Non-Operational Permissive Mode: Not Supported 00:08:18.464 NVM Sets: Not Supported 00:08:18.464 Read Recovery Levels: Not Supported 00:08:18.464 Endurance Groups: Supported 00:08:18.464 Predictable Latency Mode: Not Supported 00:08:18.464 Traffic Based Keep ALive: Not Supported 00:08:18.464 Namespace Granularity: Not Supported 00:08:18.464 SQ Associations: Not Supported 00:08:18.464 UUID List: Not Supported 00:08:18.464 Multi-Domain Subsystem: Not Supported 00:08:18.464 Fixed Capacity Management: Not Supported 00:08:18.464 Variable Capacity Management: Not Supported 00:08:18.464 Delete Endurance Group: Not Supported 00:08:18.464 Delete NVM Set: Not Supported 00:08:18.464 Extended LBA Formats Supported: Supported 
00:08:18.464 Flexible Data Placement Supported: Supported 00:08:18.464 00:08:18.464 Controller Memory Buffer Support 00:08:18.464 ================================ 00:08:18.464 Supported: No 00:08:18.464 00:08:18.464 Persistent Memory Region Support 00:08:18.464 ================================ 00:08:18.464 Supported: No 00:08:18.464 00:08:18.464 Admin Command Set Attributes 00:08:18.464 ============================ 00:08:18.464 Security Send/Receive: Not Supported 00:08:18.464 Format NVM: Supported 00:08:18.464 Firmware Activate/Download: Not Supported 00:08:18.464 Namespace Management: Supported 00:08:18.464 Device Self-Test: Not Supported 00:08:18.464 Directives: Supported 00:08:18.464 NVMe-MI: Not Supported 00:08:18.464 Virtualization Management: Not Supported 00:08:18.464 Doorbell Buffer Config: Supported 00:08:18.464 Get LBA Status Capability: Not Supported 00:08:18.464 Command & Feature Lockdown Capability: Not Supported 00:08:18.464 Abort Command Limit: 4 00:08:18.464 Async Event Request Limit: 4 00:08:18.464 Number of Firmware Slots: N/A 00:08:18.464 Firmware Slot 1 Read-Only: N/A 00:08:18.464 Firmware Activation Without Reset: N/A 00:08:18.464 Multiple Update Detection Support: N/A 00:08:18.464 Firmware Update Granularity: No Information Provided 00:08:18.464 Per-Namespace SMART Log: Yes 00:08:18.464 Asymmetric Namespace Access Log Page: Not Supported 00:08:18.464 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:18.464 Command Effects Log Page: Supported 00:08:18.464 Get Log Page Extended Data: Supported 00:08:18.464 Telemetry Log Pages: Not Supported 00:08:18.464 Persistent Event Log Pages: Not Supported 00:08:18.464 Supported Log Pages Log Page: May Support 00:08:18.464 Commands Supported & Effects Log Page: Not Supported 00:08:18.464 Feature Identifiers & Effects Log Page:May Support 00:08:18.464 NVMe-MI Commands & Effects Log Page: May Support 00:08:18.464 Data Area 4 for Telemetry Log: Not Supported 00:08:18.464 Error Log Page Entries Supported: 1 00:08:18.464 Keep Alive: Not Supported 00:08:18.464 00:08:18.464 NVM Command Set Attributes 00:08:18.464 ========================== 00:08:18.464 Submission Queue Entry Size 00:08:18.464 Max: 64 00:08:18.464 Min: 64 00:08:18.464 Completion Queue Entry Size 00:08:18.464 Max: 16 00:08:18.464 Min: 16 00:08:18.464 Number of Namespaces: 256 00:08:18.464 Compare Command: Supported 00:08:18.464 Write Uncorrectable Command: Not Supported 00:08:18.464 Dataset Management Command: Supported 00:08:18.464 Write Zeroes Command: Supported 00:08:18.464 Set Features Save Field: Supported 00:08:18.464 Reservations: Not Supported 00:08:18.464 Timestamp: Supported 00:08:18.464 Copy: Supported 00:08:18.464 Volatile Write Cache: Present 00:08:18.464 Atomic Write Unit (Normal): 1 00:08:18.464 Atomic Write Unit (PFail): 1 00:08:18.464 Atomic Compare & Write Unit: 1 00:08:18.464 Fused Compare & Write: Not Supported 00:08:18.464 Scatter-Gather List 00:08:18.464 SGL Command Set: Supported 00:08:18.464 SGL Keyed: Not Supported 00:08:18.464 SGL Bit Bucket Descriptor: Not Supported 00:08:18.464 SGL Metadata Pointer: Not Supported 00:08:18.464 Oversized SGL: Not Supported 00:08:18.464 SGL Metadata Address: Not Supported 00:08:18.464 SGL Offset: Not Supported 00:08:18.464 Transport SGL Data Block: Not Supported 00:08:18.464 Replay Protected Memory Block: Not Supported 00:08:18.464 00:08:18.464 Firmware Slot Information 00:08:18.464 ========================= 00:08:18.464 Active slot: 1 00:08:18.464 Slot 1 Firmware Revision: 1.0 00:08:18.464 00:08:18.464 
00:08:18.464 Commands Supported and Effects 00:08:18.464 ============================== 00:08:18.464 Admin Commands 00:08:18.464 -------------- 00:08:18.464 Delete I/O Submission Queue (00h): Supported 00:08:18.464 Create I/O Submission Queue (01h): Supported 00:08:18.464 Get Log Page (02h): Supported 00:08:18.464 Delete I/O Completion Queue (04h): Supported 00:08:18.464 Create I/O Completion Queue (05h): Supported 00:08:18.464 Identify (06h): Supported 00:08:18.464 Abort (08h): Supported 00:08:18.464 Set Features (09h): Supported 00:08:18.464 Get Features (0Ah): Supported 00:08:18.464 Asynchronous Event Request (0Ch): Supported 00:08:18.464 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:18.464 Directive Send (19h): Supported 00:08:18.464 Directive Receive (1Ah): Supported 00:08:18.464 Virtualization Management (1Ch): Supported 00:08:18.464 Doorbell Buffer Config (7Ch): Supported 00:08:18.464 Format NVM (80h): Supported LBA-Change 00:08:18.464 I/O Commands 00:08:18.464 ------------ 00:08:18.464 Flush (00h): Supported LBA-Change 00:08:18.464 Write (01h): Supported LBA-Change 00:08:18.464 Read (02h): Supported 00:08:18.464 Compare (05h): Supported 00:08:18.464 Write Zeroes (08h): Supported LBA-Change 00:08:18.464 Dataset Management (09h): Supported LBA-Change 00:08:18.464 Unknown (0Ch): Supported 00:08:18.464 Unknown (12h): Supported 00:08:18.464 Copy (19h): Supported LBA-Change 00:08:18.464 Unknown (1Dh): Supported LBA-Change 00:08:18.464 00:08:18.464 Error Log 00:08:18.464 ========= 00:08:18.464 00:08:18.464 Arbitration 00:08:18.464 =========== 00:08:18.464 Arbitration Burst: no limit 00:08:18.464 00:08:18.464 Power Management 00:08:18.464 ================ 00:08:18.464 Number of Power States: 1 00:08:18.464 Current Power State: Power State #0 00:08:18.464 Power State #0: 00:08:18.464 Max Power: 25.00 W 00:08:18.464 Non-Operational State: Operational 00:08:18.464 Entry Latency: 16 microseconds 00:08:18.464 Exit Latency: 4 microseconds 00:08:18.464 Relative Read Throughput: 0 00:08:18.464 Relative Read Latency: 0 00:08:18.464 Relative Write Throughput: 0 00:08:18.464 Relative Write Latency: 0 00:08:18.464 Idle Power: Not Reported 00:08:18.464 Active Power: Not Reported 00:08:18.464 Non-Operational Permissive Mode: Not Supported 00:08:18.464 00:08:18.464 Health Information 00:08:18.464 ================== 00:08:18.464 Critical Warnings: 00:08:18.464 Available Spare Space: OK 00:08:18.464 Temperature: OK 00:08:18.464 Device Reliability: OK 00:08:18.464 Read Only: No 00:08:18.464 Volatile Memory Backup: OK 00:08:18.464 Current Temperature: 323 Kelvin (50 Celsius) 00:08:18.464 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:18.464 Available Spare: 0% 00:08:18.464 Available Spare Threshold: 0% 00:08:18.464 Life Percentage Used: 0% 00:08:18.464 Data Units Read: 802 00:08:18.464 Data Units Written: 731 00:08:18.464 Host Read Commands: 40720 00:08:18.464 Host Write Commands: 40143 00:08:18.464 Controller Busy Time: 0 minutes 00:08:18.464 Power Cycles: 0 00:08:18.464 Power On Hours: 0 hours 00:08:18.464 Unsafe Shutdowns: 0 00:08:18.464 Unrecoverable Media Errors: 0 00:08:18.464 Lifetime Error Log Entries: 0 00:08:18.464 Warning Temperature Time: 0 minutes 00:08:18.464 Critical Temperature Time: 0 minutes 00:08:18.464 00:08:18.464 Number of Queues 00:08:18.464 ================ 00:08:18.464 Number of I/O Submission Queues: 64 00:08:18.464 Number of I/O Completion Queues: 64 00:08:18.464 00:08:18.464 ZNS Specific Controller Data 00:08:18.464 ============================ 00:08:18.464 
Zone Append Size Limit: 0 00:08:18.464 00:08:18.464 00:08:18.464 Active Namespaces 00:08:18.464 ================= 00:08:18.464 Namespace ID:1 00:08:18.464 Error Recovery Timeout: Unlimited 00:08:18.464 Command Set Identifier: NVM (00h) 00:08:18.464 Deallocate: Supported 00:08:18.464 Deallocated/Unwritten Error: Supported 00:08:18.464 Deallocated Read Value: All 0x00 00:08:18.464 Deallocate in Write Zeroes: Not Supported 00:08:18.464 Deallocated Guard Field: 0xFFFF 00:08:18.464 Flush: Supported 00:08:18.464 Reservation: Not Supported 00:08:18.464 Namespace Sharing Capabilities: Multiple Controllers 00:08:18.464 Size (in LBAs): 262144 (1GiB) 00:08:18.464 Capacity (in LBAs): 262144 (1GiB) 00:08:18.464 Utilization (in LBAs): 262144 (1GiB) 00:08:18.465 Thin Provisioning: Not Supported 00:08:18.465 Per-NS Atomic Units: No 00:08:18.465 Maximum Single Source Range Length: 128 00:08:18.465 Maximum Copy Length: 128 00:08:18.465 Maximum Source Range Count: 128 00:08:18.465 NGUID/EUI64 Never Reused: No 00:08:18.465 Namespace Write Protected: No 00:08:18.465 Endurance group ID: 1 00:08:18.465 Number of LBA Formats: 8 00:08:18.465 Current LBA Format: LBA Format #04 00:08:18.465 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:18.465 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:18.465 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:18.465 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:18.465 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:18.465 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:18.465 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:18.465 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:18.465 00:08:18.465 Get Feature FDP: 00:08:18.465 ================ 00:08:18.465 Enabled: Yes 00:08:18.465 FDP configuration index: 0 00:08:18.465 00:08:18.465 FDP configurations log page 00:08:18.465 =========================== 00:08:18.465 Number of FDP configurations: 1 00:08:18.465 Version: 0 00:08:18.465 Size: 112 00:08:18.465 FDP Configuration Descriptor: 0 00:08:18.465 Descriptor Size: 96 00:08:18.465 Reclaim Group Identifier format: 2 00:08:18.465 FDP Volatile Write Cache: Not Present 00:08:18.465 FDP Configuration: Valid 00:08:18.465 Vendor Specific Size: 0 00:08:18.465 Number of Reclaim Groups: 2 00:08:18.465 Number of Reclaim Unit Handles: 8 00:08:18.465 Max Placement Identifiers: 128 00:08:18.465 Number of Namespaces Supported: 256 00:08:18.465 Reclaim unit Nominal Size: 6000000 bytes 00:08:18.465 Estimated Reclaim Unit Time Limit: Not Reported 00:08:18.465 RUH Desc #000: RUH Type: Initially Isolated 00:08:18.465 RUH Desc #001: RUH Type: Initially Isolated 00:08:18.465 RUH Desc #002: RUH Type: Initially Isolated 00:08:18.465 RUH Desc #003: RUH Type: Initially Isolated 00:08:18.465 RUH Desc #004: RUH Type: Initially Isolated 00:08:18.465 RUH Desc #005: RUH Type: Initially Isolated 00:08:18.465 RUH Desc #006: RUH Type: Initially Isolated 00:08:18.465 RUH Desc #007: RUH Type: Initially Isolated 00:08:18.465 00:08:18.465 FDP reclaim unit handle usage log page 00:08:18.465 ====================================== [2024-11-21 03:17:05.822096] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 76169 terminated unexpected 00:08:18.465 Number of Reclaim Unit Handles: 8 00:08:18.465 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:18.465 RUH Usage Desc #001: RUH Attributes: Unused 00:08:18.465 RUH Usage Desc #002: RUH Attributes: Unused 00:08:18.465 RUH Usage Desc #003: RUH Attributes:
Unused 00:08:18.465 RUH Usage Desc #004: RUH Attributes: Unused 00:08:18.465 RUH Usage Desc #005: RUH Attributes: Unused 00:08:18.465 RUH Usage Desc #006: RUH Attributes: Unused 00:08:18.465 RUH Usage Desc #007: RUH Attributes: Unused 00:08:18.465 00:08:18.465 FDP statistics log page 00:08:18.465 ======================= 00:08:18.465 Host bytes with metadata written: 465608704 00:08:18.465 Media bytes with metadata written: 465661952 00:08:18.465 Media bytes erased: 0 00:08:18.465 00:08:18.465 FDP events log page 00:08:18.465 =================== 00:08:18.465 Number of FDP events: 0 00:08:18.465 00:08:18.465 NVM Specific Namespace Data 00:08:18.465 =========================== 00:08:18.465 Logical Block Storage Tag Mask: 0 00:08:18.465 Protection Information Capabilities: 00:08:18.465 16b Guard Protection Information Storage Tag Support: No 00:08:18.465 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:18.465 Storage Tag Check Read Support: No 00:08:18.465 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.465 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.465 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.465 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.465 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.465 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.465 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.465 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.465 ===================================================== 00:08:18.465 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:18.465 ===================================================== 00:08:18.465 Controller Capabilities/Features 00:08:18.465 ================================ 00:08:18.465 Vendor ID: 1b36 00:08:18.465 Subsystem Vendor ID: 1af4 00:08:18.465 Serial Number: 12340 00:08:18.465 Model Number: QEMU NVMe Ctrl 00:08:18.465 Firmware Version: 8.0.0 00:08:18.465 Recommended Arb Burst: 6 00:08:18.465 IEEE OUI Identifier: 00 54 52 00:08:18.465 Multi-path I/O 00:08:18.465 May have multiple subsystem ports: No 00:08:18.465 May have multiple controllers: No 00:08:18.465 Associated with SR-IOV VF: No 00:08:18.465 Max Data Transfer Size: 524288 00:08:18.465 Max Number of Namespaces: 256 00:08:18.465 Max Number of I/O Queues: 64 00:08:18.465 NVMe Specification Version (VS): 1.4 00:08:18.465 NVMe Specification Version (Identify): 1.4 00:08:18.465 Maximum Queue Entries: 2048 00:08:18.465 Contiguous Queues Required: Yes 00:08:18.465 Arbitration Mechanisms Supported 00:08:18.465 Weighted Round Robin: Not Supported 00:08:18.465 Vendor Specific: Not Supported 00:08:18.465 Reset Timeout: 7500 ms 00:08:18.465 Doorbell Stride: 4 bytes 00:08:18.465 NVM Subsystem Reset: Not Supported 00:08:18.465 Command Sets Supported 00:08:18.465 NVM Command Set: Supported 00:08:18.465 Boot Partition: Not Supported 00:08:18.465 Memory Page Size Minimum: 4096 bytes 00:08:18.465 Memory Page Size Maximum: 65536 bytes 00:08:18.465 Persistent Memory Region: Not Supported 00:08:18.465 Optional Asynchronous Events Supported 00:08:18.465 Namespace Attribute Notices: Supported 00:08:18.465 Firmware Activation Notices: Not Supported 
00:08:18.465 ANA Change Notices: Not Supported 00:08:18.465 PLE Aggregate Log Change Notices: Not Supported 00:08:18.465 LBA Status Info Alert Notices: Not Supported 00:08:18.465 EGE Aggregate Log Change Notices: Not Supported 00:08:18.465 Normal NVM Subsystem Shutdown event: Not Supported 00:08:18.465 Zone Descriptor Change Notices: Not Supported 00:08:18.465 Discovery Log Change Notices: Not Supported 00:08:18.465 Controller Attributes 00:08:18.465 128-bit Host Identifier: Not Supported 00:08:18.465 Non-Operational Permissive Mode: Not Supported 00:08:18.465 NVM Sets: Not Supported 00:08:18.465 Read Recovery Levels: Not Supported 00:08:18.465 Endurance Groups: Not Supported 00:08:18.465 Predictable Latency Mode: Not Supported 00:08:18.465 Traffic Based Keep Alive: Not Supported 00:08:18.465 Namespace Granularity: Not Supported 00:08:18.465 SQ Associations: Not Supported 00:08:18.465 UUID List: Not Supported 00:08:18.465 Multi-Domain Subsystem: Not Supported 00:08:18.465 Fixed Capacity Management: Not Supported 00:08:18.465 Variable Capacity Management: Not Supported 00:08:18.465 Delete Endurance Group: Not Supported 00:08:18.465 Delete NVM Set: Not Supported 00:08:18.465 Extended LBA Formats Supported: Supported 00:08:18.465 Flexible Data Placement Supported: Not Supported 00:08:18.465 00:08:18.465 Controller Memory Buffer Support 00:08:18.465 ================================ 00:08:18.465 Supported: No 00:08:18.465 00:08:18.465 Persistent Memory Region Support 00:08:18.465 ================================ 00:08:18.465 Supported: No 00:08:18.465 00:08:18.465 Admin Command Set Attributes 00:08:18.465 ============================ 00:08:18.465 Security Send/Receive: Not Supported 00:08:18.465 Format NVM: Supported 00:08:18.465 Firmware Activate/Download: Not Supported 00:08:18.465 Namespace Management: Supported 00:08:18.465 Device Self-Test: Not Supported 00:08:18.465 Directives: Supported 00:08:18.465 NVMe-MI: Not Supported 00:08:18.465 Virtualization Management: Not Supported 00:08:18.465 Doorbell Buffer Config: Supported 00:08:18.465 Get LBA Status Capability: Not Supported 00:08:18.465 Command & Feature Lockdown Capability: Not Supported 00:08:18.465 Abort Command Limit: 4 00:08:18.465 Async Event Request Limit: 4 00:08:18.465 Number of Firmware Slots: N/A 00:08:18.465 Firmware Slot 1 Read-Only: N/A 00:08:18.465 Firmware Activation Without Reset: N/A 00:08:18.465 Multiple Update Detection Support: N/A 00:08:18.465 Firmware Update Granularity: No Information Provided 00:08:18.465 Per-Namespace SMART Log: Yes 00:08:18.465 Asymmetric Namespace Access Log Page: Not Supported 00:08:18.465 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:18.465 Command Effects Log Page: Supported 00:08:18.465 Get Log Page Extended Data: Supported 00:08:18.465 Telemetry Log Pages: Not Supported 00:08:18.465 Persistent Event Log Pages: Not Supported 00:08:18.465 Supported Log Pages Log Page: May Support 00:08:18.465 Commands Supported & Effects Log Page: Not Supported 00:08:18.465 Feature Identifiers & Effects Log Page: May Support 00:08:18.466 NVMe-MI Commands & Effects Log Page: May Support 00:08:18.466 Data Area 4 for Telemetry Log: Not Supported 00:08:18.466 Error Log Page Entries Supported: 1 00:08:18.466 Keep Alive: Not Supported 00:08:18.466 00:08:18.466 NVM Command Set Attributes 00:08:18.466 ========================== 00:08:18.466 Submission Queue Entry Size 00:08:18.466 Max: 64 00:08:18.466 Min: 64 00:08:18.466 Completion Queue Entry Size 00:08:18.466 Max: 16 00:08:18.466 Min: 16 00:08:18.466 Number of
Namespaces: 256 00:08:18.466 Compare Command: Supported 00:08:18.466 Write Uncorrectable Command: Not Supported 00:08:18.466 Dataset Management Command: Supported 00:08:18.466 Write Zeroes Command: Supported 00:08:18.466 Set Features Save Field: Supported 00:08:18.466 Reservations: Not Supported 00:08:18.466 Timestamp: Supported 00:08:18.466 Copy: Supported 00:08:18.466 Volatile Write Cache: Present 00:08:18.466 Atomic Write Unit (Normal): 1 00:08:18.466 Atomic Write Unit (PFail): 1 00:08:18.466 Atomic Compare & Write Unit: 1 00:08:18.466 Fused Compare & Write: Not Supported 00:08:18.466 Scatter-Gather List 00:08:18.466 SGL Command Set: Supported 00:08:18.466 SGL Keyed: Not Supported 00:08:18.466 SGL Bit Bucket Descriptor: Not Supported 00:08:18.466 SGL Metadata Pointer: Not Supported 00:08:18.466 Oversized SGL: Not Supported 00:08:18.466 SGL Metadata Address: Not Supported 00:08:18.466 SGL Offset: Not Supported 00:08:18.466 Transport SGL Data Block: Not Supported 00:08:18.466 Replay Protected Memory Block: Not Supported 00:08:18.466 00:08:18.466 Firmware Slot Information 00:08:18.466 ========================= 00:08:18.466 Active slot: 1 00:08:18.466 Slot 1 Firmware Revision: 1.0 00:08:18.466 00:08:18.466 00:08:18.466 Commands Supported and Effects 00:08:18.466 ============================== 00:08:18.466 Admin Commands 00:08:18.466 -------------- 00:08:18.466 Delete I/O Submission Queue (00h): Supported 00:08:18.466 Create I/O Submission Queue (01h): Supported 00:08:18.466 Get Log Page (02h): Supported 00:08:18.466 Delete I/O Completion Queue (04h): Supported 00:08:18.466 Create I/O Completion Queue (05h): Supported 00:08:18.466 Identify (06h): Supported 00:08:18.466 Abort (08h): Supported 00:08:18.466 Set Features (09h): Supported 00:08:18.466 Get Features (0Ah): Supported 00:08:18.466 Asynchronous Event Request (0Ch): Supported 00:08:18.466 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:18.466 Directive Send (19h): Supported 00:08:18.466 Directive Receive (1Ah): Supported 00:08:18.466 Virtualization Management (1Ch): Supported 00:08:18.466 Doorbell Buffer Config (7Ch): Supported 00:08:18.466 Format NVM (80h): Supported LBA-Change 00:08:18.466 I/O Commands 00:08:18.466 ------------ 00:08:18.466 Flush (00h): Supported LBA-Change 00:08:18.466 Write (01h): Supported LBA-Change 00:08:18.466 Read (02h): Supported 00:08:18.466 Compare (05h): Supported 00:08:18.466 Write Zeroes (08h): Supported LBA-Change 00:08:18.466 Dataset Management (09h): Supported LBA-Change 00:08:18.466 Unknown (0Ch): Supported 00:08:18.466 Unknown (12h): Supported 00:08:18.466 Copy (19h): Supported LBA-Change 00:08:18.466 Unknown (1Dh): Supported LBA-Change 00:08:18.466 00:08:18.466 Error Log 00:08:18.466 ========= 00:08:18.466 00:08:18.466 Arbitration 00:08:18.466 =========== 00:08:18.466 Arbitration Burst: no limit 00:08:18.466 00:08:18.466 Power Management 00:08:18.466 ================ 00:08:18.466 Number of Power States: 1 00:08:18.466 Current Power State: Power State #0 00:08:18.466 Power State #0: 00:08:18.466 Max Power: 25.00 W 00:08:18.466 Non-Operational State: Operational 00:08:18.466 Entry Latency: 16 microseconds 00:08:18.466 Exit Latency: 4 microseconds 00:08:18.466 Relative Read Throughput: 0 00:08:18.466 Relative Read Latency: 0 00:08:18.466 Relative Write Throughput: 0 00:08:18.466 Relative Write Latency: 0 00:08:18.466 Idle Power: Not Reported 00:08:18.466 Active Power: Not Reported 00:08:18.466 Non-Operational Permissive Mode: Not Supported 00:08:18.466 00:08:18.466 Health Information 
00:08:18.466 ================== 00:08:18.466 Critical Warnings: 00:08:18.466 Available Spare Space: OK 00:08:18.466 Temperature: OK 00:08:18.466 Device Reliability: OK 00:08:18.466 Read Only: No 00:08:18.466 Volatile Memory Backup: OK 00:08:18.466 Current Temperature: 323 Kelvin (50 Celsius) 00:08:18.466 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:18.466 Available Spare: 0% 00:08:18.466 Available Spare Threshold: 0% 00:08:18.466 Life Percentage Used: 0% 00:08:18.466 Data Units Read: 725 00:08:18.466 Data Units Written: 653 00:08:18.466 Host Read Commands: 39759 00:08:18.466 Host Write Commands: 39545 00:08:18.466 Controller Busy Time: 0 minutes 00:08:18.466 Power Cycles: 0 00:08:18.466 Power On Hours: 0 hours 00:08:18.466 Unsafe Shutdowns: 0 00:08:18.466 Unrecoverable Media Errors: 0 00:08:18.466 Lifetime Error Log Entries: 0 00:08:18.466 Warning Temperature Time: 0 minutes 00:08:18.466 Critical Temperature Time: 0 minutes 00:08:18.466 00:08:18.466 Number of Queues 00:08:18.466 ================ 00:08:18.466 Number of I/O Submission Queues: 64 00:08:18.466 Number of I/O Completion Queues: 64 00:08:18.466 00:08:18.466 ZNS Specific Controller Data 00:08:18.466 ============================ 00:08:18.466 Zone Append Size Limit: 0 00:08:18.466 00:08:18.466 00:08:18.466 Active Namespaces 00:08:18.466 ================= 00:08:18.466 Namespace ID:1 00:08:18.466 Error Recovery Timeout: Unlimited 00:08:18.466 Command Set Identifier: NVM (00h) 00:08:18.466 Deallocate: Supported 00:08:18.466 Deallocated/Unwritten Error: Supported 00:08:18.466 Deallocated Read Value: All 0x00 00:08:18.466 Deallocate in Write Zeroes: Not Supported 00:08:18.466 Deallocated Guard Field: 0xFFFF 00:08:18.466 Flush: Supported 00:08:18.466 Reservation: Not Supported 00:08:18.466 Metadata Transferred as: Separate Metadata Buffer 00:08:18.466 Namespace Sharing Capabilities: Private 00:08:18.466 Size (in LBAs): 1548666 (5GiB) 00:08:18.466 Capacity (in LBAs): 1548666 (5GiB) 00:08:18.466 Utilization (in LBAs): 1548666 (5GiB) 00:08:18.466 Thin Provisioning: Not Supported 00:08:18.466 Per-NS Atomic Units: No 00:08:18.466 Maximum Single Source Range Length: 128 00:08:18.466 Maximum Copy Length: 128 00:08:18.466 Maximum Source Range Count: 128 00:08:18.466 NGUID/EUI64 Never Reused: No 00:08:18.466 Namespace Write Protected: No 00:08:18.466 Number of LBA Formats: 8 [2024-11-21 03:17:05.824276] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 76169 terminated unexpected 00:08:18.466 Current LBA Format: LBA Format #07 00:08:18.466 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:18.466 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:18.466 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:18.466 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:18.466 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:18.466 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:18.466 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:18.466 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:18.466 00:08:18.466 NVM Specific Namespace Data 00:08:18.466 =========================== 00:08:18.466 Logical Block Storage Tag Mask: 0 00:08:18.466 Protection Information Capabilities: 00:08:18.466 16b Guard Protection Information Storage Tag Support: No 00:08:18.466 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:18.466 Storage Tag Check Read Support: No 00:08:18.466 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information
Format: 16b Guard PI 00:08:18.466 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.466 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.466 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.466 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.466 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.466 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.466 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.466 ===================================================== 00:08:18.466 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:18.466 ===================================================== 00:08:18.466 Controller Capabilities/Features 00:08:18.466 ================================ 00:08:18.466 Vendor ID: 1b36 00:08:18.466 Subsystem Vendor ID: 1af4 00:08:18.466 Serial Number: 12341 00:08:18.466 Model Number: QEMU NVMe Ctrl 00:08:18.466 Firmware Version: 8.0.0 00:08:18.466 Recommended Arb Burst: 6 00:08:18.466 IEEE OUI Identifier: 00 54 52 00:08:18.466 Multi-path I/O 00:08:18.466 May have multiple subsystem ports: No 00:08:18.466 May have multiple controllers: No 00:08:18.466 Associated with SR-IOV VF: No 00:08:18.467 Max Data Transfer Size: 524288 00:08:18.467 Max Number of Namespaces: 256 00:08:18.467 Max Number of I/O Queues: 64 00:08:18.467 NVMe Specification Version (VS): 1.4 00:08:18.467 NVMe Specification Version (Identify): 1.4 00:08:18.467 Maximum Queue Entries: 2048 00:08:18.467 Contiguous Queues Required: Yes 00:08:18.467 Arbitration Mechanisms Supported 00:08:18.467 Weighted Round Robin: Not Supported 00:08:18.467 Vendor Specific: Not Supported 00:08:18.467 Reset Timeout: 7500 ms 00:08:18.467 Doorbell Stride: 4 bytes 00:08:18.467 NVM Subsystem Reset: Not Supported 00:08:18.467 Command Sets Supported 00:08:18.467 NVM Command Set: Supported 00:08:18.467 Boot Partition: Not Supported 00:08:18.467 Memory Page Size Minimum: 4096 bytes 00:08:18.467 Memory Page Size Maximum: 65536 bytes 00:08:18.467 Persistent Memory Region: Not Supported 00:08:18.467 Optional Asynchronous Events Supported 00:08:18.467 Namespace Attribute Notices: Supported 00:08:18.467 Firmware Activation Notices: Not Supported 00:08:18.467 ANA Change Notices: Not Supported 00:08:18.467 PLE Aggregate Log Change Notices: Not Supported 00:08:18.467 LBA Status Info Alert Notices: Not Supported 00:08:18.467 EGE Aggregate Log Change Notices: Not Supported 00:08:18.467 Normal NVM Subsystem Shutdown event: Not Supported 00:08:18.467 Zone Descriptor Change Notices: Not Supported 00:08:18.467 Discovery Log Change Notices: Not Supported 00:08:18.467 Controller Attributes 00:08:18.467 128-bit Host Identifier: Not Supported 00:08:18.467 Non-Operational Permissive Mode: Not Supported 00:08:18.467 NVM Sets: Not Supported 00:08:18.467 Read Recovery Levels: Not Supported 00:08:18.467 Endurance Groups: Not Supported 00:08:18.467 Predictable Latency Mode: Not Supported 00:08:18.467 Traffic Based Keep Alive: Not Supported 00:08:18.467 Namespace Granularity: Not Supported 00:08:18.467 SQ Associations: Not Supported 00:08:18.467 UUID List: Not Supported 00:08:18.467 Multi-Domain Subsystem: Not Supported 00:08:18.467 Fixed Capacity Management: Not Supported 00:08:18.467 Variable Capacity Management: Not
Supported 00:08:18.467 Delete Endurance Group: Not Supported 00:08:18.467 Delete NVM Set: Not Supported 00:08:18.467 Extended LBA Formats Supported: Supported 00:08:18.467 Flexible Data Placement Supported: Not Supported 00:08:18.467 00:08:18.467 Controller Memory Buffer Support 00:08:18.467 ================================ 00:08:18.467 Supported: No 00:08:18.467 00:08:18.467 Persistent Memory Region Support 00:08:18.467 ================================ 00:08:18.467 Supported: No 00:08:18.467 00:08:18.467 Admin Command Set Attributes 00:08:18.467 ============================ 00:08:18.467 Security Send/Receive: Not Supported 00:08:18.467 Format NVM: Supported 00:08:18.467 Firmware Activate/Download: Not Supported 00:08:18.467 Namespace Management: Supported 00:08:18.467 Device Self-Test: Not Supported 00:08:18.467 Directives: Supported 00:08:18.467 NVMe-MI: Not Supported 00:08:18.467 Virtualization Management: Not Supported 00:08:18.467 Doorbell Buffer Config: Supported 00:08:18.467 Get LBA Status Capability: Not Supported 00:08:18.467 Command & Feature Lockdown Capability: Not Supported 00:08:18.467 Abort Command Limit: 4 00:08:18.467 Async Event Request Limit: 4 00:08:18.467 Number of Firmware Slots: N/A 00:08:18.467 Firmware Slot 1 Read-Only: N/A 00:08:18.467 Firmware Activation Without Reset: N/A 00:08:18.467 Multiple Update Detection Support: N/A 00:08:18.467 Firmware Update Granularity: No Information Provided 00:08:18.467 Per-Namespace SMART Log: Yes 00:08:18.467 Asymmetric Namespace Access Log Page: Not Supported 00:08:18.467 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:18.467 Command Effects Log Page: Supported 00:08:18.467 Get Log Page Extended Data: Supported 00:08:18.467 Telemetry Log Pages: Not Supported 00:08:18.467 Persistent Event Log Pages: Not Supported 00:08:18.467 Supported Log Pages Log Page: May Support 00:08:18.467 Commands Supported & Effects Log Page: Not Supported 00:08:18.467 Feature Identifiers & Effects Log Page: May Support 00:08:18.467 NVMe-MI Commands & Effects Log Page: May Support 00:08:18.467 Data Area 4 for Telemetry Log: Not Supported 00:08:18.467 Error Log Page Entries Supported: 1 00:08:18.467 Keep Alive: Not Supported 00:08:18.467 00:08:18.467 NVM Command Set Attributes 00:08:18.467 ========================== 00:08:18.467 Submission Queue Entry Size 00:08:18.467 Max: 64 00:08:18.467 Min: 64 00:08:18.467 Completion Queue Entry Size 00:08:18.467 Max: 16 00:08:18.467 Min: 16 00:08:18.467 Number of Namespaces: 256 00:08:18.467 Compare Command: Supported 00:08:18.467 Write Uncorrectable Command: Not Supported 00:08:18.467 Dataset Management Command: Supported 00:08:18.467 Write Zeroes Command: Supported 00:08:18.467 Set Features Save Field: Supported 00:08:18.467 Reservations: Not Supported 00:08:18.467 Timestamp: Supported 00:08:18.467 Copy: Supported 00:08:18.467 Volatile Write Cache: Present 00:08:18.467 Atomic Write Unit (Normal): 1 00:08:18.467 Atomic Write Unit (PFail): 1 00:08:18.467 Atomic Compare & Write Unit: 1 00:08:18.467 Fused Compare & Write: Not Supported 00:08:18.467 Scatter-Gather List 00:08:18.467 SGL Command Set: Supported 00:08:18.467 SGL Keyed: Not Supported 00:08:18.467 SGL Bit Bucket Descriptor: Not Supported 00:08:18.467 SGL Metadata Pointer: Not Supported 00:08:18.467 Oversized SGL: Not Supported 00:08:18.467 SGL Metadata Address: Not Supported 00:08:18.467 SGL Offset: Not Supported 00:08:18.467 Transport SGL Data Block: Not Supported 00:08:18.467 Replay Protected Memory Block: Not Supported 00:08:18.467 00:08:18.467 Firmware
Slot Information 00:08:18.467 ========================= 00:08:18.467 Active slot: 1 00:08:18.467 Slot 1 Firmware Revision: 1.0 00:08:18.467 00:08:18.467 00:08:18.467 Commands Supported and Effects 00:08:18.467 ============================== 00:08:18.467 Admin Commands 00:08:18.467 -------------- 00:08:18.467 Delete I/O Submission Queue (00h): Supported 00:08:18.467 Create I/O Submission Queue (01h): Supported 00:08:18.467 Get Log Page (02h): Supported 00:08:18.467 Delete I/O Completion Queue (04h): Supported 00:08:18.467 Create I/O Completion Queue (05h): Supported 00:08:18.467 Identify (06h): Supported 00:08:18.467 Abort (08h): Supported 00:08:18.467 Set Features (09h): Supported 00:08:18.467 Get Features (0Ah): Supported 00:08:18.467 Asynchronous Event Request (0Ch): Supported 00:08:18.467 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:18.467 Directive Send (19h): Supported 00:08:18.467 Directive Receive (1Ah): Supported 00:08:18.467 Virtualization Management (1Ch): Supported 00:08:18.467 Doorbell Buffer Config (7Ch): Supported 00:08:18.467 Format NVM (80h): Supported LBA-Change 00:08:18.467 I/O Commands 00:08:18.467 ------------ 00:08:18.467 Flush (00h): Supported LBA-Change 00:08:18.467 Write (01h): Supported LBA-Change 00:08:18.467 Read (02h): Supported 00:08:18.467 Compare (05h): Supported 00:08:18.467 Write Zeroes (08h): Supported LBA-Change 00:08:18.467 Dataset Management (09h): Supported LBA-Change 00:08:18.467 Unknown (0Ch): Supported 00:08:18.467 Unknown (12h): Supported 00:08:18.467 Copy (19h): Supported LBA-Change 00:08:18.467 Unknown (1Dh): Supported LBA-Change 00:08:18.467 00:08:18.467 Error Log 00:08:18.467 ========= 00:08:18.467 00:08:18.467 Arbitration 00:08:18.467 =========== 00:08:18.467 Arbitration Burst: no limit 00:08:18.467 00:08:18.467 Power Management 00:08:18.467 ================ 00:08:18.467 Number of Power States: 1 00:08:18.467 Current Power State: Power State #0 00:08:18.467 Power State #0: 00:08:18.467 Max Power: 25.00 W 00:08:18.467 Non-Operational State: Operational 00:08:18.467 Entry Latency: 16 microseconds 00:08:18.467 Exit Latency: 4 microseconds 00:08:18.467 Relative Read Throughput: 0 00:08:18.467 Relative Read Latency: 0 00:08:18.467 Relative Write Throughput: 0 00:08:18.467 Relative Write Latency: 0 00:08:18.467 Idle Power: Not Reported 00:08:18.467 Active Power: Not Reported 00:08:18.467 Non-Operational Permissive Mode: Not Supported 00:08:18.467 00:08:18.467 Health Information 00:08:18.467 ================== 00:08:18.467 Critical Warnings: 00:08:18.467 Available Spare Space: OK 00:08:18.467 Temperature: OK 00:08:18.467 Device Reliability: OK 00:08:18.467 Read Only: No 00:08:18.467 Volatile Memory Backup: OK 00:08:18.467 Current Temperature: 323 Kelvin (50 Celsius) 00:08:18.467 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:18.467 Available Spare: 0% 00:08:18.467 Available Spare Threshold: 0% 00:08:18.467 Life Percentage Used: 0% 00:08:18.467 Data Units Read: 1166 00:08:18.467 Data Units Written: 1033 00:08:18.467 Host Read Commands: 58543 00:08:18.467 Host Write Commands: 57344 00:08:18.467 Controller Busy Time: 0 minutes 00:08:18.467 Power Cycles: 0 00:08:18.468 Power On Hours: 0 hours 00:08:18.468 Unsafe Shutdowns: 0 00:08:18.468 Unrecoverable Media Errors: 0 00:08:18.468 Lifetime Error Log Entries: 0 00:08:18.468 Warning Temperature Time: 0 minutes 00:08:18.468 Critical Temperature Time: 0 minutes 00:08:18.468 00:08:18.468 Number of Queues 00:08:18.468 ================ 00:08:18.468 Number of I/O Submission Queues: 64 
00:08:18.468 Number of I/O Completion Queues: 64 00:08:18.468 00:08:18.468 ZNS Specific Controller Data 00:08:18.468 ============================ 00:08:18.468 Zone Append Size Limit: 0 00:08:18.468 00:08:18.468 00:08:18.468 Active Namespaces 00:08:18.468 ================= 00:08:18.468 Namespace ID:1 00:08:18.468 Error Recovery Timeout: Unlimited 00:08:18.468 Command Set Identifier: NVM (00h) 00:08:18.468 Deallocate: Supported 00:08:18.468 Deallocated/Unwritten Error: Supported 00:08:18.468 Deallocated Read Value: All 0x00 00:08:18.468 Deallocate in Write Zeroes: Not Supported 00:08:18.468 Deallocated Guard Field: 0xFFFF 00:08:18.468 Flush: Supported 00:08:18.468 Reservation: Not Supported 00:08:18.468 Namespace Sharing Capabilities: Private 00:08:18.468 Size (in LBAs): 1310720 (5GiB) 00:08:18.468 Capacity (in LBAs): 1310720 (5GiB) 00:08:18.468 Utilization (in LBAs): 1310720 (5GiB) 00:08:18.468 Thin Provisioning: Not Supported 00:08:18.468 Per-NS Atomic Units: No 00:08:18.468 Maximum Single Source Range Length: 128 00:08:18.468 Maximum Copy Length: 128 00:08:18.468 Maximum Source Range Count: 128 00:08:18.468 NGUID/EUI64 Never Reused: No 00:08:18.468 Namespace Write Protected: No 00:08:18.468 Number of LBA Formats: 8 00:08:18.468 Current LBA Format: LBA Format #04 00:08:18.468 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:18.468 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:18.468 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:18.468 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:18.468 LBA Format #04: Data Size: 4096 Metadata Size: 0 [2024-11-21 03:17:05.826652] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 76169 terminated unexpected 00:08:18.468 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:18.468 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:18.468 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:18.468 00:08:18.468 NVM Specific Namespace Data 00:08:18.468 =========================== 00:08:18.468 Logical Block Storage Tag Mask: 0 00:08:18.468 Protection Information Capabilities: 00:08:18.468 16b Guard Protection Information Storage Tag Support: No 00:08:18.468 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:18.468 Storage Tag Check Read Support: No 00:08:18.468 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.468 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.468 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.468 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.468 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.468 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.468 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.468 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.468 ===================================================== 00:08:18.468 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:18.468 ===================================================== 00:08:18.468 Controller Capabilities/Features 00:08:18.468 ================================ 00:08:18.468 Vendor ID: 1b36 00:08:18.468 Subsystem Vendor ID: 1af4 00:08:18.468 Serial Number: 12342
00:08:18.468 Model Number: QEMU NVMe Ctrl 00:08:18.468 Firmware Version: 8.0.0 00:08:18.468 Recommended Arb Burst: 6 00:08:18.468 IEEE OUI Identifier: 00 54 52 00:08:18.468 Multi-path I/O 00:08:18.468 May have multiple subsystem ports: No 00:08:18.468 May have multiple controllers: No 00:08:18.468 Associated with SR-IOV VF: No 00:08:18.468 Max Data Transfer Size: 524288 00:08:18.468 Max Number of Namespaces: 256 00:08:18.468 Max Number of I/O Queues: 64 00:08:18.468 NVMe Specification Version (VS): 1.4 00:08:18.468 NVMe Specification Version (Identify): 1.4 00:08:18.468 Maximum Queue Entries: 2048 00:08:18.468 Contiguous Queues Required: Yes 00:08:18.468 Arbitration Mechanisms Supported 00:08:18.468 Weighted Round Robin: Not Supported 00:08:18.468 Vendor Specific: Not Supported 00:08:18.468 Reset Timeout: 7500 ms 00:08:18.468 Doorbell Stride: 4 bytes 00:08:18.468 NVM Subsystem Reset: Not Supported 00:08:18.468 Command Sets Supported 00:08:18.468 NVM Command Set: Supported 00:08:18.468 Boot Partition: Not Supported 00:08:18.468 Memory Page Size Minimum: 4096 bytes 00:08:18.468 Memory Page Size Maximum: 65536 bytes 00:08:18.468 Persistent Memory Region: Not Supported 00:08:18.468 Optional Asynchronous Events Supported 00:08:18.468 Namespace Attribute Notices: Supported 00:08:18.468 Firmware Activation Notices: Not Supported 00:08:18.468 ANA Change Notices: Not Supported 00:08:18.468 PLE Aggregate Log Change Notices: Not Supported 00:08:18.468 LBA Status Info Alert Notices: Not Supported 00:08:18.468 EGE Aggregate Log Change Notices: Not Supported 00:08:18.468 Normal NVM Subsystem Shutdown event: Not Supported 00:08:18.468 Zone Descriptor Change Notices: Not Supported 00:08:18.468 Discovery Log Change Notices: Not Supported 00:08:18.468 Controller Attributes 00:08:18.468 128-bit Host Identifier: Not Supported 00:08:18.468 Non-Operational Permissive Mode: Not Supported 00:08:18.468 NVM Sets: Not Supported 00:08:18.468 Read Recovery Levels: Not Supported 00:08:18.468 Endurance Groups: Not Supported 00:08:18.468 Predictable Latency Mode: Not Supported 00:08:18.468 Traffic Based Keep Alive: Not Supported 00:08:18.468 Namespace Granularity: Not Supported 00:08:18.468 SQ Associations: Not Supported 00:08:18.468 UUID List: Not Supported 00:08:18.468 Multi-Domain Subsystem: Not Supported 00:08:18.468 Fixed Capacity Management: Not Supported 00:08:18.468 Variable Capacity Management: Not Supported 00:08:18.468 Delete Endurance Group: Not Supported 00:08:18.468 Delete NVM Set: Not Supported 00:08:18.468 Extended LBA Formats Supported: Supported 00:08:18.468 Flexible Data Placement Supported: Not Supported 00:08:18.468 00:08:18.468 Controller Memory Buffer Support 00:08:18.468 ================================ 00:08:18.468 Supported: No 00:08:18.468 00:08:18.468 Persistent Memory Region Support 00:08:18.468 ================================ 00:08:18.468 Supported: No 00:08:18.468 00:08:18.468 Admin Command Set Attributes 00:08:18.468 ============================ 00:08:18.468 Security Send/Receive: Not Supported 00:08:18.468 Format NVM: Supported 00:08:18.468 Firmware Activate/Download: Not Supported 00:08:18.468 Namespace Management: Supported 00:08:18.468 Device Self-Test: Not Supported 00:08:18.468 Directives: Supported 00:08:18.468 NVMe-MI: Not Supported 00:08:18.468 Virtualization Management: Not Supported 00:08:18.468 Doorbell Buffer Config: Supported 00:08:18.468 Get LBA Status Capability: Not Supported 00:08:18.468 Command & Feature Lockdown Capability: Not Supported 00:08:18.468 Abort Command
Limit: 4 00:08:18.468 Async Event Request Limit: 4 00:08:18.468 Number of Firmware Slots: N/A 00:08:18.468 Firmware Slot 1 Read-Only: N/A 00:08:18.468 Firmware Activation Without Reset: N/A 00:08:18.468 Multiple Update Detection Support: N/A 00:08:18.468 Firmware Update Granularity: No Information Provided 00:08:18.469 Per-Namespace SMART Log: Yes 00:08:18.469 Asymmetric Namespace Access Log Page: Not Supported 00:08:18.469 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:18.469 Command Effects Log Page: Supported 00:08:18.469 Get Log Page Extended Data: Supported 00:08:18.469 Telemetry Log Pages: Not Supported 00:08:18.469 Persistent Event Log Pages: Not Supported 00:08:18.469 Supported Log Pages Log Page: May Support 00:08:18.469 Commands Supported & Effects Log Page: Not Supported 00:08:18.469 Feature Identifiers & Effects Log Page: May Support 00:08:18.469 NVMe-MI Commands & Effects Log Page: May Support 00:08:18.469 Data Area 4 for Telemetry Log: Not Supported 00:08:18.469 Error Log Page Entries Supported: 1 00:08:18.469 Keep Alive: Not Supported 00:08:18.469 00:08:18.469 NVM Command Set Attributes 00:08:18.469 ========================== 00:08:18.469 Submission Queue Entry Size 00:08:18.469 Max: 64 00:08:18.469 Min: 64 00:08:18.469 Completion Queue Entry Size 00:08:18.469 Max: 16 00:08:18.469 Min: 16 00:08:18.469 Number of Namespaces: 256 00:08:18.469 Compare Command: Supported 00:08:18.469 Write Uncorrectable Command: Not Supported 00:08:18.469 Dataset Management Command: Supported 00:08:18.469 Write Zeroes Command: Supported 00:08:18.469 Set Features Save Field: Supported 00:08:18.469 Reservations: Not Supported 00:08:18.469 Timestamp: Supported 00:08:18.469 Copy: Supported 00:08:18.469 Volatile Write Cache: Present 00:08:18.469 Atomic Write Unit (Normal): 1 00:08:18.469 Atomic Write Unit (PFail): 1 00:08:18.469 Atomic Compare & Write Unit: 1 00:08:18.469 Fused Compare & Write: Not Supported 00:08:18.469 Scatter-Gather List 00:08:18.469 SGL Command Set: Supported 00:08:18.469 SGL Keyed: Not Supported 00:08:18.469 SGL Bit Bucket Descriptor: Not Supported 00:08:18.469 SGL Metadata Pointer: Not Supported 00:08:18.469 Oversized SGL: Not Supported 00:08:18.469 SGL Metadata Address: Not Supported 00:08:18.469 SGL Offset: Not Supported 00:08:18.469 Transport SGL Data Block: Not Supported 00:08:18.469 Replay Protected Memory Block: Not Supported 00:08:18.469 00:08:18.469 Firmware Slot Information 00:08:18.469 ========================= 00:08:18.469 Active slot: 1 00:08:18.469 Slot 1 Firmware Revision: 1.0 00:08:18.469 00:08:18.469 00:08:18.469 Commands Supported and Effects 00:08:18.469 ============================== 00:08:18.469 Admin Commands 00:08:18.469 -------------- 00:08:18.469 Delete I/O Submission Queue (00h): Supported 00:08:18.469 Create I/O Submission Queue (01h): Supported 00:08:18.469 Get Log Page (02h): Supported 00:08:18.469 Delete I/O Completion Queue (04h): Supported 00:08:18.469 Create I/O Completion Queue (05h): Supported 00:08:18.469 Identify (06h): Supported 00:08:18.469 Abort (08h): Supported 00:08:18.469 Set Features (09h): Supported 00:08:18.469 Get Features (0Ah): Supported 00:08:18.469 Asynchronous Event Request (0Ch): Supported 00:08:18.469 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:18.469 Directive Send (19h): Supported 00:08:18.469 Directive Receive (1Ah): Supported 00:08:18.469 Virtualization Management (1Ch): Supported 00:08:18.469 Doorbell Buffer Config (7Ch): Supported 00:08:18.469 Format NVM (80h): Supported LBA-Change 00:08:18.469 I/O
Commands 00:08:18.469 ------------ 00:08:18.469 Flush (00h): Supported LBA-Change 00:08:18.469 Write (01h): Supported LBA-Change 00:08:18.469 Read (02h): Supported 00:08:18.469 Compare (05h): Supported 00:08:18.469 Write Zeroes (08h): Supported LBA-Change 00:08:18.469 Dataset Management (09h): Supported LBA-Change 00:08:18.469 Unknown (0Ch): Supported 00:08:18.469 Unknown (12h): Supported 00:08:18.469 Copy (19h): Supported LBA-Change 00:08:18.469 Unknown (1Dh): Supported LBA-Change 00:08:18.469 00:08:18.469 Error Log 00:08:18.469 ========= 00:08:18.469 00:08:18.469 Arbitration 00:08:18.469 =========== 00:08:18.469 Arbitration Burst: no limit 00:08:18.469 00:08:18.469 Power Management 00:08:18.469 ================ 00:08:18.469 Number of Power States: 1 00:08:18.469 Current Power State: Power State #0 00:08:18.469 Power State #0: 00:08:18.469 Max Power: 25.00 W 00:08:18.469 Non-Operational State: Operational 00:08:18.469 Entry Latency: 16 microseconds 00:08:18.469 Exit Latency: 4 microseconds 00:08:18.469 Relative Read Throughput: 0 00:08:18.469 Relative Read Latency: 0 00:08:18.469 Relative Write Throughput: 0 00:08:18.469 Relative Write Latency: 0 00:08:18.469 Idle Power: Not Reported 00:08:18.469 Active Power: Not Reported 00:08:18.469 Non-Operational Permissive Mode: Not Supported 00:08:18.469 00:08:18.469 Health Information 00:08:18.469 ================== 00:08:18.469 Critical Warnings: 00:08:18.469 Available Spare Space: OK 00:08:18.469 Temperature: OK 00:08:18.469 Device Reliability: OK 00:08:18.469 Read Only: No 00:08:18.469 Volatile Memory Backup: OK 00:08:18.469 Current Temperature: 323 Kelvin (50 Celsius) 00:08:18.469 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:18.469 Available Spare: 0% 00:08:18.469 Available Spare Threshold: 0% 00:08:18.469 Life Percentage Used: 0% 00:08:18.469 Data Units Read: 2225 00:08:18.469 Data Units Written: 2012 00:08:18.469 Host Read Commands: 120640 00:08:18.469 Host Write Commands: 118909 00:08:18.469 Controller Busy Time: 0 minutes 00:08:18.469 Power Cycles: 0 00:08:18.469 Power On Hours: 0 hours 00:08:18.469 Unsafe Shutdowns: 0 00:08:18.469 Unrecoverable Media Errors: 0 00:08:18.469 Lifetime Error Log Entries: 0 00:08:18.469 Warning Temperature Time: 0 minutes 00:08:18.469 Critical Temperature Time: 0 minutes 00:08:18.469 00:08:18.469 Number of Queues 00:08:18.469 ================ 00:08:18.469 Number of I/O Submission Queues: 64 00:08:18.469 Number of I/O Completion Queues: 64 00:08:18.469 00:08:18.469 ZNS Specific Controller Data 00:08:18.469 ============================ 00:08:18.469 Zone Append Size Limit: 0 00:08:18.469 00:08:18.469 00:08:18.469 Active Namespaces 00:08:18.469 ================= 00:08:18.469 Namespace ID:1 00:08:18.469 Error Recovery Timeout: Unlimited 00:08:18.469 Command Set Identifier: NVM (00h) 00:08:18.469 Deallocate: Supported 00:08:18.469 Deallocated/Unwritten Error: Supported 00:08:18.469 Deallocated Read Value: All 0x00 00:08:18.469 Deallocate in Write Zeroes: Not Supported 00:08:18.469 Deallocated Guard Field: 0xFFFF 00:08:18.469 Flush: Supported 00:08:18.469 Reservation: Not Supported 00:08:18.469 Namespace Sharing Capabilities: Private 00:08:18.469 Size (in LBAs): 1048576 (4GiB) 00:08:18.469 Capacity (in LBAs): 1048576 (4GiB) 00:08:18.469 Utilization (in LBAs): 1048576 (4GiB) 00:08:18.469 Thin Provisioning: Not Supported 00:08:18.469 Per-NS Atomic Units: No 00:08:18.469 Maximum Single Source Range Length: 128 00:08:18.469 Maximum Copy Length: 128 00:08:18.469 Maximum Source Range Count: 128 00:08:18.469 
NGUID/EUI64 Never Reused: No 00:08:18.469 Namespace Write Protected: No 00:08:18.469 Number of LBA Formats: 8 00:08:18.469 Current LBA Format: LBA Format #04 00:08:18.469 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:18.469 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:18.469 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:18.469 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:18.469 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:18.469 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:18.469 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:18.469 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:18.469 00:08:18.469 NVM Specific Namespace Data 00:08:18.469 =========================== 00:08:18.469 Logical Block Storage Tag Mask: 0 00:08:18.469 Protection Information Capabilities: 00:08:18.469 16b Guard Protection Information Storage Tag Support: No 00:08:18.469 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:18.469 Storage Tag Check Read Support: No 00:08:18.469 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.469 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.469 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.469 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.469 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.469 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.469 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.469 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.469 Namespace ID:2 00:08:18.469 Error Recovery Timeout: Unlimited 00:08:18.469 Command Set Identifier: NVM (00h) 00:08:18.469 Deallocate: Supported 00:08:18.469 Deallocated/Unwritten Error: Supported 00:08:18.469 Deallocated Read Value: All 0x00 00:08:18.469 Deallocate in Write Zeroes: Not Supported 00:08:18.469 Deallocated Guard Field: 0xFFFF 00:08:18.470 Flush: Supported 00:08:18.470 Reservation: Not Supported 00:08:18.470 Namespace Sharing Capabilities: Private 00:08:18.470 Size (in LBAs): 1048576 (4GiB) 00:08:18.470 Capacity (in LBAs): 1048576 (4GiB) 00:08:18.470 Utilization (in LBAs): 1048576 (4GiB) 00:08:18.470 Thin Provisioning: Not Supported 00:08:18.470 Per-NS Atomic Units: No 00:08:18.470 Maximum Single Source Range Length: 128 00:08:18.470 Maximum Copy Length: 128 00:08:18.470 Maximum Source Range Count: 128 00:08:18.470 NGUID/EUI64 Never Reused: No 00:08:18.470 Namespace Write Protected: No 00:08:18.470 Number of LBA Formats: 8 00:08:18.470 Current LBA Format: LBA Format #04 00:08:18.470 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:18.470 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:18.470 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:18.470 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:18.470 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:18.470 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:18.470 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:18.470 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:18.470 00:08:18.470 NVM Specific Namespace Data 00:08:18.470 =========================== 00:08:18.470 Logical Block Storage Tag Mask: 0 00:08:18.470 
Protection Information Capabilities: 00:08:18.470 16b Guard Protection Information Storage Tag Support: No 00:08:18.470 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:18.470 Storage Tag Check Read Support: No 00:08:18.470 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.470 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.470 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.470 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.470 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.470 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.470 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.470 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.470 Namespace ID:3 00:08:18.470 Error Recovery Timeout: Unlimited 00:08:18.470 Command Set Identifier: NVM (00h) 00:08:18.470 Deallocate: Supported 00:08:18.470 Deallocated/Unwritten Error: Supported 00:08:18.470 Deallocated Read Value: All 0x00 00:08:18.470 Deallocate in Write Zeroes: Not Supported 00:08:18.470 Deallocated Guard Field: 0xFFFF 00:08:18.470 Flush: Supported 00:08:18.470 Reservation: Not Supported 00:08:18.470 Namespace Sharing Capabilities: Private 00:08:18.470 Size (in LBAs): 1048576 (4GiB) 00:08:18.470 Capacity (in LBAs): 1048576 (4GiB) 00:08:18.470 Utilization (in LBAs): 1048576 (4GiB) 00:08:18.470 Thin Provisioning: Not Supported 00:08:18.470 Per-NS Atomic Units: No 00:08:18.470 Maximum Single Source Range Length: 128 00:08:18.470 Maximum Copy Length: 128 00:08:18.470 Maximum Source Range Count: 128 00:08:18.470 NGUID/EUI64 Never Reused: No 00:08:18.470 Namespace Write Protected: No 00:08:18.470 Number of LBA Formats: 8 00:08:18.470 Current LBA Format: LBA Format #04 00:08:18.470 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:18.470 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:18.470 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:18.470 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:18.470 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:18.470 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:18.470 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:18.470 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:18.470 00:08:18.470 NVM Specific Namespace Data 00:08:18.470 =========================== 00:08:18.470 Logical Block Storage Tag Mask: 0 00:08:18.470 Protection Information Capabilities: 00:08:18.470 16b Guard Protection Information Storage Tag Support: No 00:08:18.470 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:18.470 Storage Tag Check Read Support: No 00:08:18.470 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.470 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.470 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.470 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.470 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.470 Extended LBA Format #05: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.470 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.470 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.470 03:17:05 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:18.470 03:17:05 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:18.733 ===================================================== 00:08:18.733 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:18.733 ===================================================== 00:08:18.733 Controller Capabilities/Features 00:08:18.733 ================================ 00:08:18.733 Vendor ID: 1b36 00:08:18.733 Subsystem Vendor ID: 1af4 00:08:18.733 Serial Number: 12340 00:08:18.733 Model Number: QEMU NVMe Ctrl 00:08:18.733 Firmware Version: 8.0.0 00:08:18.733 Recommended Arb Burst: 6 00:08:18.733 IEEE OUI Identifier: 00 54 52 00:08:18.733 Multi-path I/O 00:08:18.733 May have multiple subsystem ports: No 00:08:18.733 May have multiple controllers: No 00:08:18.733 Associated with SR-IOV VF: No 00:08:18.733 Max Data Transfer Size: 524288 00:08:18.733 Max Number of Namespaces: 256 00:08:18.733 Max Number of I/O Queues: 64 00:08:18.733 NVMe Specification Version (VS): 1.4 00:08:18.733 NVMe Specification Version (Identify): 1.4 00:08:18.733 Maximum Queue Entries: 2048 00:08:18.733 Contiguous Queues Required: Yes 00:08:18.733 Arbitration Mechanisms Supported 00:08:18.733 Weighted Round Robin: Not Supported 00:08:18.733 Vendor Specific: Not Supported 00:08:18.733 Reset Timeout: 7500 ms 00:08:18.733 Doorbell Stride: 4 bytes 00:08:18.733 NVM Subsystem Reset: Not Supported 00:08:18.733 Command Sets Supported 00:08:18.733 NVM Command Set: Supported 00:08:18.733 Boot Partition: Not Supported 00:08:18.733 Memory Page Size Minimum: 4096 bytes 00:08:18.733 Memory Page Size Maximum: 65536 bytes 00:08:18.733 Persistent Memory Region: Not Supported 00:08:18.733 Optional Asynchronous Events Supported 00:08:18.733 Namespace Attribute Notices: Supported 00:08:18.733 Firmware Activation Notices: Not Supported 00:08:18.733 ANA Change Notices: Not Supported 00:08:18.733 PLE Aggregate Log Change Notices: Not Supported 00:08:18.733 LBA Status Info Alert Notices: Not Supported 00:08:18.733 EGE Aggregate Log Change Notices: Not Supported 00:08:18.733 Normal NVM Subsystem Shutdown event: Not Supported 00:08:18.733 Zone Descriptor Change Notices: Not Supported 00:08:18.733 Discovery Log Change Notices: Not Supported 00:08:18.733 Controller Attributes 00:08:18.733 128-bit Host Identifier: Not Supported 00:08:18.733 Non-Operational Permissive Mode: Not Supported 00:08:18.733 NVM Sets: Not Supported 00:08:18.733 Read Recovery Levels: Not Supported 00:08:18.733 Endurance Groups: Not Supported 00:08:18.733 Predictable Latency Mode: Not Supported 00:08:18.733 Traffic Based Keep Alive: Not Supported 00:08:18.733 Namespace Granularity: Not Supported 00:08:18.733 SQ Associations: Not Supported 00:08:18.733 UUID List: Not Supported 00:08:18.733 Multi-Domain Subsystem: Not Supported 00:08:18.733 Fixed Capacity Management: Not Supported 00:08:18.733 Variable Capacity Management: Not Supported 00:08:18.733 Delete Endurance Group: Not Supported 00:08:18.733 Delete NVM Set: Not Supported 00:08:18.733 Extended LBA Formats Supported: Supported 00:08:18.733 Flexible Data Placement Supported: Not Supported 00:08:18.733
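Aside on the spdk_nvme_identify invocation traced above: the tool is essentially a front-end over SPDK's public NVMe driver API, and the same identify data can be read programmatically. The following is a minimal sketch in C, not the tool's actual source: it assumes the probe/attach flow used by SPDK's hello_world example, the program name "identify_sketch" is an arbitrary placeholder, and error handling plus namespace iteration are omitted.

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* Return true to attach to every controller the probe discovers. */
    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
            return true;
    }

    /* The driver caches the Identify Controller data during attach,
     * so it can be read back immediately without issuing a command. */
    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
    {
            const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

            printf("NVMe Controller at %s\n", trid->traddr);
            printf("Vendor ID: %04x\n", cdata->vid);
            printf("Subsystem Vendor ID: %04x\n", cdata->ssvid);
            printf("Serial Number: %.20s\n", (const char *)cdata->sn);
            printf("Model Number: %.40s\n", (const char *)cdata->mn);
    }

    int
    main(void)
    {
            struct spdk_env_opts opts;

            opts.opts_size = sizeof(opts); /* expected by newer SPDK releases */
            spdk_env_opts_init(&opts);
            opts.name = "identify_sketch"; /* placeholder app name, an assumption */
            if (spdk_env_init(&opts) < 0) {
                    return 1;
            }
            /* A NULL transport ID probes all local PCIe NVMe functions. */
            if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
                    return 1;
            }
            return 0;
    }

Passing a specific transport ID instead of NULL restricts the probe to one controller, which is what the -r 'trtype:PCIe traddr:0000:00:10.0' argument does for the tool in the trace above.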
00:08:18.733 Controller Memory Buffer Support 00:08:18.733 ================================ 00:08:18.733 Supported: No 00:08:18.733 00:08:18.733 Persistent Memory Region Support 00:08:18.733 ================================ 00:08:18.733 Supported: No 00:08:18.733 00:08:18.733 Admin Command Set Attributes 00:08:18.733 ============================ 00:08:18.733 Security Send/Receive: Not Supported 00:08:18.733 Format NVM: Supported 00:08:18.733 Firmware Activate/Download: Not Supported 00:08:18.733 Namespace Management: Supported 00:08:18.733 Device Self-Test: Not Supported 00:08:18.733 Directives: Supported 00:08:18.733 NVMe-MI: Not Supported 00:08:18.733 Virtualization Management: Not Supported 00:08:18.733 Doorbell Buffer Config: Supported 00:08:18.733 Get LBA Status Capability: Not Supported 00:08:18.733 Command & Feature Lockdown Capability: Not Supported 00:08:18.733 Abort Command Limit: 4 00:08:18.733 Async Event Request Limit: 4 00:08:18.733 Number of Firmware Slots: N/A 00:08:18.733 Firmware Slot 1 Read-Only: N/A 00:08:18.733 Firmware Activation Without Reset: N/A 00:08:18.733 Multiple Update Detection Support: N/A 00:08:18.733 Firmware Update Granularity: No Information Provided 00:08:18.733 Per-Namespace SMART Log: Yes 00:08:18.734 Asymmetric Namespace Access Log Page: Not Supported 00:08:18.734 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:18.734 Command Effects Log Page: Supported 00:08:18.734 Get Log Page Extended Data: Supported 00:08:18.734 Telemetry Log Pages: Not Supported 00:08:18.734 Persistent Event Log Pages: Not Supported 00:08:18.734 Supported Log Pages Log Page: May Support 00:08:18.734 Commands Supported & Effects Log Page: Not Supported 00:08:18.734 Feature Identifiers & Effects Log Page: May Support 00:08:18.734 NVMe-MI Commands & Effects Log Page: May Support 00:08:18.734 Data Area 4 for Telemetry Log: Not Supported 00:08:18.734 Error Log Page Entries Supported: 1 00:08:18.734 Keep Alive: Not Supported 00:08:18.734 00:08:18.734 NVM Command Set Attributes 00:08:18.734 ========================== 00:08:18.734 Submission Queue Entry Size 00:08:18.734 Max: 64 00:08:18.734 Min: 64 00:08:18.734 Completion Queue Entry Size 00:08:18.734 Max: 16 00:08:18.734 Min: 16 00:08:18.734 Number of Namespaces: 256 00:08:18.734 Compare Command: Supported 00:08:18.734 Write Uncorrectable Command: Not Supported 00:08:18.734 Dataset Management Command: Supported 00:08:18.734 Write Zeroes Command: Supported 00:08:18.734 Set Features Save Field: Supported 00:08:18.734 Reservations: Not Supported 00:08:18.734 Timestamp: Supported 00:08:18.734 Copy: Supported 00:08:18.734 Volatile Write Cache: Present 00:08:18.734 Atomic Write Unit (Normal): 1 00:08:18.734 Atomic Write Unit (PFail): 1 00:08:18.734 Atomic Compare & Write Unit: 1 00:08:18.734 Fused Compare & Write: Not Supported 00:08:18.734 Scatter-Gather List 00:08:18.734 SGL Command Set: Supported 00:08:18.734 SGL Keyed: Not Supported 00:08:18.734 SGL Bit Bucket Descriptor: Not Supported 00:08:18.734 SGL Metadata Pointer: Not Supported 00:08:18.734 Oversized SGL: Not Supported 00:08:18.734 SGL Metadata Address: Not Supported 00:08:18.734 SGL Offset: Not Supported 00:08:18.734 Transport SGL Data Block: Not Supported 00:08:18.734 Replay Protected Memory Block: Not Supported 00:08:18.734 00:08:18.734 Firmware Slot Information 00:08:18.734 ========================= 00:08:18.734 Active slot: 1 00:08:18.734 Slot 1 Firmware Revision: 1.0 00:08:18.734 00:08:18.734 00:08:18.734 Commands Supported and Effects 00:08:18.734
============================== 00:08:18.734 Admin Commands 00:08:18.734 -------------- 00:08:18.734 Delete I/O Submission Queue (00h): Supported 00:08:18.734 Create I/O Submission Queue (01h): Supported 00:08:18.734 Get Log Page (02h): Supported 00:08:18.734 Delete I/O Completion Queue (04h): Supported 00:08:18.734 Create I/O Completion Queue (05h): Supported 00:08:18.734 Identify (06h): Supported 00:08:18.734 Abort (08h): Supported 00:08:18.734 Set Features (09h): Supported 00:08:18.734 Get Features (0Ah): Supported 00:08:18.734 Asynchronous Event Request (0Ch): Supported 00:08:18.734 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:18.734 Directive Send (19h): Supported 00:08:18.734 Directive Receive (1Ah): Supported 00:08:18.734 Virtualization Management (1Ch): Supported 00:08:18.734 Doorbell Buffer Config (7Ch): Supported 00:08:18.734 Format NVM (80h): Supported LBA-Change 00:08:18.734 I/O Commands 00:08:18.734 ------------ 00:08:18.734 Flush (00h): Supported LBA-Change 00:08:18.734 Write (01h): Supported LBA-Change 00:08:18.734 Read (02h): Supported 00:08:18.734 Compare (05h): Supported 00:08:18.734 Write Zeroes (08h): Supported LBA-Change 00:08:18.734 Dataset Management (09h): Supported LBA-Change 00:08:18.734 Unknown (0Ch): Supported 00:08:18.734 Unknown (12h): Supported 00:08:18.734 Copy (19h): Supported LBA-Change 00:08:18.734 Unknown (1Dh): Supported LBA-Change 00:08:18.734 00:08:18.734 Error Log 00:08:18.734 ========= 00:08:18.734 00:08:18.734 Arbitration 00:08:18.734 =========== 00:08:18.734 Arbitration Burst: no limit 00:08:18.734 00:08:18.734 Power Management 00:08:18.734 ================ 00:08:18.734 Number of Power States: 1 00:08:18.734 Current Power State: Power State #0 00:08:18.734 Power State #0: 00:08:18.734 Max Power: 25.00 W 00:08:18.734 Non-Operational State: Operational 00:08:18.734 Entry Latency: 16 microseconds 00:08:18.734 Exit Latency: 4 microseconds 00:08:18.734 Relative Read Throughput: 0 00:08:18.734 Relative Read Latency: 0 00:08:18.734 Relative Write Throughput: 0 00:08:18.734 Relative Write Latency: 0 00:08:18.734 Idle Power: Not Reported 00:08:18.734 Active Power: Not Reported 00:08:18.734 Non-Operational Permissive Mode: Not Supported 00:08:18.734 00:08:18.734 Health Information 00:08:18.734 ================== 00:08:18.734 Critical Warnings: 00:08:18.734 Available Spare Space: OK 00:08:18.734 Temperature: OK 00:08:18.734 Device Reliability: OK 00:08:18.734 Read Only: No 00:08:18.734 Volatile Memory Backup: OK 00:08:18.734 Current Temperature: 323 Kelvin (50 Celsius) 00:08:18.734 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:18.734 Available Spare: 0% 00:08:18.734 Available Spare Threshold: 0% 00:08:18.734 Life Percentage Used: 0% 00:08:18.734 Data Units Read: 725 00:08:18.734 Data Units Written: 653 00:08:18.734 Host Read Commands: 39759 00:08:18.734 Host Write Commands: 39545 00:08:18.734 Controller Busy Time: 0 minutes 00:08:18.734 Power Cycles: 0 00:08:18.734 Power On Hours: 0 hours 00:08:18.734 Unsafe Shutdowns: 0 00:08:18.734 Unrecoverable Media Errors: 0 00:08:18.734 Lifetime Error Log Entries: 0 00:08:18.734 Warning Temperature Time: 0 minutes 00:08:18.734 Critical Temperature Time: 0 minutes 00:08:18.734 00:08:18.734 Number of Queues 00:08:18.734 ================ 00:08:18.734 Number of I/O Submission Queues: 64 00:08:18.734 Number of I/O Completion Queues: 64 00:08:18.734 00:08:18.734 ZNS Specific Controller Data 00:08:18.734 ============================ 00:08:18.734 Zone Append Size Limit: 0 00:08:18.734 00:08:18.734 
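Aside on reading the recurring Health Information and namespace-size fields in these dumps: NVMe reports the composite temperature in Kelvin, so the parenthesized Celsius values are simply the Kelvin reading minus 273, and the GiB figures follow from the LBA count multiplied by the 4096-byte data size of the current LBA format:

    323 K - 273 = 50 C (current temperature);  343 K - 273 = 70 C (threshold)
    1310720 LBAs x 4096 B = 5368709120 B = 5 GiB   (the 12341 namespace above)
    1048576 LBAs x 4096 B = 4294967296 B = 4 GiB   (the 12342 namespaces above)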
00:08:18.734 Active Namespaces 00:08:18.734 ================= 00:08:18.734 Namespace ID:1 00:08:18.734 Error Recovery Timeout: Unlimited 00:08:18.734 Command Set Identifier: NVM (00h) 00:08:18.735 Deallocate: Supported 00:08:18.735 Deallocated/Unwritten Error: Supported 00:08:18.735 Deallocated Read Value: All 0x00 00:08:18.735 Deallocate in Write Zeroes: Not Supported 00:08:18.735 Deallocated Guard Field: 0xFFFF 00:08:18.735 Flush: Supported 00:08:18.735 Reservation: Not Supported 00:08:18.735 Metadata Transferred as: Separate Metadata Buffer 00:08:18.735 Namespace Sharing Capabilities: Private 00:08:18.735 Size (in LBAs): 1548666 (5GiB) 00:08:18.735 Capacity (in LBAs): 1548666 (5GiB) 00:08:18.735 Utilization (in LBAs): 1548666 (5GiB) 00:08:18.735 Thin Provisioning: Not Supported 00:08:18.735 Per-NS Atomic Units: No 00:08:18.735 Maximum Single Source Range Length: 128 00:08:18.735 Maximum Copy Length: 128 00:08:18.735 Maximum Source Range Count: 128 00:08:18.735 NGUID/EUI64 Never Reused: No 00:08:18.735 Namespace Write Protected: No 00:08:18.735 Number of LBA Formats: 8 00:08:18.735 Current LBA Format: LBA Format #07 00:08:18.735 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:18.735 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:18.735 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:18.735 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:18.735 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:18.735 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:18.735 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:18.735 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:18.735 00:08:18.735 NVM Specific Namespace Data 00:08:18.735 =========================== 00:08:18.735 Logical Block Storage Tag Mask: 0 00:08:18.735 Protection Information Capabilities: 00:08:18.735 16b Guard Protection Information Storage Tag Support: No 00:08:18.735 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:18.735 Storage Tag Check Read Support: No 00:08:18.735 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.735 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.735 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.735 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.735 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.735 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.735 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.735 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.735 03:17:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:18.735 03:17:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:18.998 ===================================================== 00:08:18.998 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:18.998 ===================================================== 00:08:18.998 Controller Capabilities/Features 00:08:18.998 ================================ 00:08:18.998 Vendor ID: 1b36 00:08:18.998 Subsystem Vendor ID: 1af4 00:08:18.998 Serial Number: 12341 00:08:18.998 Model Number: QEMU NVMe Ctrl 
00:08:18.998 Firmware Version: 8.0.0 00:08:18.998 Recommended Arb Burst: 6 00:08:18.998 IEEE OUI Identifier: 00 54 52 00:08:18.998 Multi-path I/O 00:08:18.998 May have multiple subsystem ports: No 00:08:18.998 May have multiple controllers: No 00:08:18.998 Associated with SR-IOV VF: No 00:08:18.998 Max Data Transfer Size: 524288 00:08:18.998 Max Number of Namespaces: 256 00:08:18.998 Max Number of I/O Queues: 64 00:08:18.998 NVMe Specification Version (VS): 1.4 00:08:18.998 NVMe Specification Version (Identify): 1.4 00:08:18.998 Maximum Queue Entries: 2048 00:08:18.998 Contiguous Queues Required: Yes 00:08:18.998 Arbitration Mechanisms Supported 00:08:18.998 Weighted Round Robin: Not Supported 00:08:18.998 Vendor Specific: Not Supported 00:08:18.998 Reset Timeout: 7500 ms 00:08:18.998 Doorbell Stride: 4 bytes 00:08:18.998 NVM Subsystem Reset: Not Supported 00:08:18.998 Command Sets Supported 00:08:18.998 NVM Command Set: Supported 00:08:18.998 Boot Partition: Not Supported 00:08:18.998 Memory Page Size Minimum: 4096 bytes 00:08:18.998 Memory Page Size Maximum: 65536 bytes 00:08:18.998 Persistent Memory Region: Not Supported 00:08:18.998 Optional Asynchronous Events Supported 00:08:18.998 Namespace Attribute Notices: Supported 00:08:18.998 Firmware Activation Notices: Not Supported 00:08:18.998 ANA Change Notices: Not Supported 00:08:18.998 PLE Aggregate Log Change Notices: Not Supported 00:08:18.998 LBA Status Info Alert Notices: Not Supported 00:08:18.998 EGE Aggregate Log Change Notices: Not Supported 00:08:18.998 Normal NVM Subsystem Shutdown event: Not Supported 00:08:18.998 Zone Descriptor Change Notices: Not Supported 00:08:18.998 Discovery Log Change Notices: Not Supported 00:08:18.998 Controller Attributes 00:08:18.998 128-bit Host Identifier: Not Supported 00:08:18.998 Non-Operational Permissive Mode: Not Supported 00:08:18.998 NVM Sets: Not Supported 00:08:18.998 Read Recovery Levels: Not Supported 00:08:18.998 Endurance Groups: Not Supported 00:08:18.998 Predictable Latency Mode: Not Supported 00:08:18.998 Traffic Based Keep Alive: Not Supported 00:08:18.998 Namespace Granularity: Not Supported 00:08:18.998 SQ Associations: Not Supported 00:08:18.998 UUID List: Not Supported 00:08:18.998 Multi-Domain Subsystem: Not Supported 00:08:18.998 Fixed Capacity Management: Not Supported 00:08:18.998 Variable Capacity Management: Not Supported 00:08:18.998 Delete Endurance Group: Not Supported 00:08:18.998 Delete NVM Set: Not Supported 00:08:18.998 Extended LBA Formats Supported: Supported 00:08:18.998 Flexible Data Placement Supported: Not Supported 00:08:18.998 00:08:18.998 Controller Memory Buffer Support 00:08:18.998 ================================ 00:08:18.998 Supported: No 00:08:18.998 00:08:18.998 Persistent Memory Region Support 00:08:18.998 ================================ 00:08:18.998 Supported: No 00:08:18.998 00:08:18.998 Admin Command Set Attributes 00:08:18.998 ============================ 00:08:18.998 Security Send/Receive: Not Supported 00:08:18.998 Format NVM: Supported 00:08:18.998 Firmware Activate/Download: Not Supported 00:08:18.998 Namespace Management: Supported 00:08:18.998 Device Self-Test: Not Supported 00:08:18.998 Directives: Supported 00:08:18.998 NVMe-MI: Not Supported 00:08:18.998 Virtualization Management: Not Supported 00:08:18.998 Doorbell Buffer Config: Supported 00:08:18.998 Get LBA Status Capability: Not Supported 00:08:18.998 Command & Feature Lockdown Capability: Not Supported 00:08:18.998 Abort Command Limit: 4 00:08:18.998 Async Event Request 
Limit: 4 00:08:18.998 Number of Firmware Slots: N/A 00:08:18.998 Firmware Slot 1 Read-Only: N/A 00:08:18.999 Firmware Activation Without Reset: N/A 00:08:18.999 Multiple Update Detection Support: N/A 00:08:18.999 Firmware Update Granularity: No Information Provided 00:08:18.999 Per-Namespace SMART Log: Yes 00:08:18.999 Asymmetric Namespace Access Log Page: Not Supported 00:08:18.999 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:18.999 Command Effects Log Page: Supported 00:08:18.999 Get Log Page Extended Data: Supported 00:08:18.999 Telemetry Log Pages: Not Supported 00:08:18.999 Persistent Event Log Pages: Not Supported 00:08:18.999 Supported Log Pages Log Page: May Support 00:08:18.999 Commands Supported & Effects Log Page: Not Supported 00:08:18.999 Feature Identifiers & Effects Log Page: May Support 00:08:18.999 NVMe-MI Commands & Effects Log Page: May Support 00:08:18.999 Data Area 4 for Telemetry Log: Not Supported 00:08:18.999 Error Log Page Entries Supported: 1 00:08:18.999 Keep Alive: Not Supported 00:08:18.999 00:08:18.999 NVM Command Set Attributes 00:08:18.999 ========================== 00:08:18.999 Submission Queue Entry Size 00:08:18.999 Max: 64 00:08:18.999 Min: 64 00:08:18.999 Completion Queue Entry Size 00:08:18.999 Max: 16 00:08:18.999 Min: 16 00:08:18.999 Number of Namespaces: 256 00:08:18.999 Compare Command: Supported 00:08:18.999 Write Uncorrectable Command: Not Supported 00:08:18.999 Dataset Management Command: Supported 00:08:18.999 Write Zeroes Command: Supported 00:08:18.999 Set Features Save Field: Supported 00:08:18.999 Reservations: Not Supported 00:08:18.999 Timestamp: Supported 00:08:18.999 Copy: Supported 00:08:18.999 Volatile Write Cache: Present 00:08:18.999 Atomic Write Unit (Normal): 1 00:08:18.999 Atomic Write Unit (PFail): 1 00:08:18.999 Atomic Compare & Write Unit: 1 00:08:18.999 Fused Compare & Write: Not Supported 00:08:18.999 Scatter-Gather List 00:08:18.999 SGL Command Set: Supported 00:08:18.999 SGL Keyed: Not Supported 00:08:18.999 SGL Bit Bucket Descriptor: Not Supported 00:08:18.999 SGL Metadata Pointer: Not Supported 00:08:18.999 Oversized SGL: Not Supported 00:08:18.999 SGL Metadata Address: Not Supported 00:08:18.999 SGL Offset: Not Supported 00:08:18.999 Transport SGL Data Block: Not Supported 00:08:18.999 Replay Protected Memory Block: Not Supported 00:08:18.999 00:08:18.999 Firmware Slot Information 00:08:18.999 ========================= 00:08:18.999 Active slot: 1 00:08:18.999 Slot 1 Firmware Revision: 1.0 00:08:18.999 00:08:18.999 00:08:18.999 Commands Supported and Effects 00:08:18.999 ============================== 00:08:18.999 Admin Commands 00:08:18.999 -------------- 00:08:18.999 Delete I/O Submission Queue (00h): Supported 00:08:18.999 Create I/O Submission Queue (01h): Supported 00:08:18.999 Get Log Page (02h): Supported 00:08:18.999 Delete I/O Completion Queue (04h): Supported 00:08:18.999 Create I/O Completion Queue (05h): Supported 00:08:18.999 Identify (06h): Supported 00:08:18.999 Abort (08h): Supported 00:08:18.999 Set Features (09h): Supported 00:08:18.999 Get Features (0Ah): Supported 00:08:18.999 Asynchronous Event Request (0Ch): Supported 00:08:18.999 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:18.999 Directive Send (19h): Supported 00:08:18.999 Directive Receive (1Ah): Supported 00:08:18.999 Virtualization Management (1Ch): Supported 00:08:18.999 Doorbell Buffer Config (7Ch): Supported 00:08:18.999 Format NVM (80h): Supported LBA-Change 00:08:18.999 I/O Commands 00:08:18.999 ------------ 
00:08:18.999 Flush (00h): Supported LBA-Change 00:08:18.999 Write (01h): Supported LBA-Change 00:08:18.999 Read (02h): Supported 00:08:18.999 Compare (05h): Supported 00:08:18.999 Write Zeroes (08h): Supported LBA-Change 00:08:18.999 Dataset Management (09h): Supported LBA-Change 00:08:18.999 Unknown (0Ch): Supported 00:08:18.999 Unknown (12h): Supported 00:08:18.999 Copy (19h): Supported LBA-Change 00:08:18.999 Unknown (1Dh): Supported LBA-Change 00:08:18.999 00:08:18.999 Error Log 00:08:18.999 ========= 00:08:18.999 00:08:18.999 Arbitration 00:08:18.999 =========== 00:08:18.999 Arbitration Burst: no limit 00:08:18.999 00:08:18.999 Power Management 00:08:18.999 ================ 00:08:18.999 Number of Power States: 1 00:08:18.999 Current Power State: Power State #0 00:08:18.999 Power State #0: 00:08:18.999 Max Power: 25.00 W 00:08:18.999 Non-Operational State: Operational 00:08:18.999 Entry Latency: 16 microseconds 00:08:18.999 Exit Latency: 4 microseconds 00:08:18.999 Relative Read Throughput: 0 00:08:18.999 Relative Read Latency: 0 00:08:18.999 Relative Write Throughput: 0 00:08:18.999 Relative Write Latency: 0 00:08:18.999 Idle Power: Not Reported 00:08:18.999 Active Power: Not Reported 00:08:18.999 Non-Operational Permissive Mode: Not Supported 00:08:18.999 00:08:18.999 Health Information 00:08:18.999 ================== 00:08:18.999 Critical Warnings: 00:08:18.999 Available Spare Space: OK 00:08:18.999 Temperature: OK 00:08:18.999 Device Reliability: OK 00:08:18.999 Read Only: No 00:08:18.999 Volatile Memory Backup: OK 00:08:18.999 Current Temperature: 323 Kelvin (50 Celsius) 00:08:18.999 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:18.999 Available Spare: 0% 00:08:18.999 Available Spare Threshold: 0% 00:08:18.999 Life Percentage Used: 0% 00:08:18.999 Data Units Read: 1166 00:08:18.999 Data Units Written: 1033 00:08:18.999 Host Read Commands: 58543 00:08:18.999 Host Write Commands: 57344 00:08:18.999 Controller Busy Time: 0 minutes 00:08:18.999 Power Cycles: 0 00:08:18.999 Power On Hours: 0 hours 00:08:18.999 Unsafe Shutdowns: 0 00:08:18.999 Unrecoverable Media Errors: 0 00:08:18.999 Lifetime Error Log Entries: 0 00:08:18.999 Warning Temperature Time: 0 minutes 00:08:18.999 Critical Temperature Time: 0 minutes 00:08:18.999 00:08:18.999 Number of Queues 00:08:18.999 ================ 00:08:18.999 Number of I/O Submission Queues: 64 00:08:18.999 Number of I/O Completion Queues: 64 00:08:18.999 00:08:18.999 ZNS Specific Controller Data 00:08:18.999 ============================ 00:08:18.999 Zone Append Size Limit: 0 00:08:18.999 00:08:18.999 00:08:18.999 Active Namespaces 00:08:18.999 ================= 00:08:18.999 Namespace ID:1 00:08:18.999 Error Recovery Timeout: Unlimited 00:08:18.999 Command Set Identifier: NVM (00h) 00:08:18.999 Deallocate: Supported 00:08:18.999 Deallocated/Unwritten Error: Supported 00:08:18.999 Deallocated Read Value: All 0x00 00:08:18.999 Deallocate in Write Zeroes: Not Supported 00:08:18.999 Deallocated Guard Field: 0xFFFF 00:08:18.999 Flush: Supported 00:08:18.999 Reservation: Not Supported 00:08:18.999 Namespace Sharing Capabilities: Private 00:08:18.999 Size (in LBAs): 1310720 (5GiB) 00:08:18.999 Capacity (in LBAs): 1310720 (5GiB) 00:08:18.999 Utilization (in LBAs): 1310720 (5GiB) 00:08:18.999 Thin Provisioning: Not Supported 00:08:18.999 Per-NS Atomic Units: No 00:08:18.999 Maximum Single Source Range Length: 128 00:08:18.999 Maximum Copy Length: 128 00:08:18.999 Maximum Source Range Count: 128 00:08:18.999 NGUID/EUI64 Never Reused: No 00:08:18.999 
Namespace Write Protected: No 00:08:18.999 Number of LBA Formats: 8 00:08:18.999 Current LBA Format: LBA Format #04 00:08:18.999 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:18.999 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:18.999 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:18.999 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:18.999 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:18.999 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:18.999 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:18.999 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:18.999 00:08:18.999 NVM Specific Namespace Data 00:08:18.999 =========================== 00:08:18.999 Logical Block Storage Tag Mask: 0 00:08:18.999 Protection Information Capabilities: 00:08:18.999 16b Guard Protection Information Storage Tag Support: No 00:08:18.999 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:18.999 Storage Tag Check Read Support: No 00:08:18.999 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.999 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.999 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.999 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.999 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.999 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.999 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.999 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:18.999 03:17:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:19.000 03:17:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:19.000 ===================================================== 00:08:19.000 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:19.000 ===================================================== 00:08:19.000 Controller Capabilities/Features 00:08:19.000 ================================ 00:08:19.000 Vendor ID: 1b36 00:08:19.000 Subsystem Vendor ID: 1af4 00:08:19.000 Serial Number: 12342 00:08:19.000 Model Number: QEMU NVMe Ctrl 00:08:19.000 Firmware Version: 8.0.0 00:08:19.000 Recommended Arb Burst: 6 00:08:19.000 IEEE OUI Identifier: 00 54 52 00:08:19.000 Multi-path I/O 00:08:19.000 May have multiple subsystem ports: No 00:08:19.000 May have multiple controllers: No 00:08:19.000 Associated with SR-IOV VF: No 00:08:19.000 Max Data Transfer Size: 524288 00:08:19.000 Max Number of Namespaces: 256 00:08:19.000 Max Number of I/O Queues: 64 00:08:19.000 NVMe Specification Version (VS): 1.4 00:08:19.000 NVMe Specification Version (Identify): 1.4 00:08:19.000 Maximum Queue Entries: 2048 00:08:19.000 Contiguous Queues Required: Yes 00:08:19.000 Arbitration Mechanisms Supported 00:08:19.000 Weighted Round Robin: Not Supported 00:08:19.000 Vendor Specific: Not Supported 00:08:19.000 Reset Timeout: 7500 ms 00:08:19.000 Doorbell Stride: 4 bytes 00:08:19.000 NVM Subsystem Reset: Not Supported 00:08:19.000 Command Sets Supported 00:08:19.000 NVM Command Set: Supported 00:08:19.000 Boot Partition: Not Supported 00:08:19.000 Memory Page Size Minimum: 
4096 bytes 00:08:19.000 Memory Page Size Maximum: 65536 bytes 00:08:19.000 Persistent Memory Region: Not Supported 00:08:19.000 Optional Asynchronous Events Supported 00:08:19.000 Namespace Attribute Notices: Supported 00:08:19.000 Firmware Activation Notices: Not Supported 00:08:19.000 ANA Change Notices: Not Supported 00:08:19.000 PLE Aggregate Log Change Notices: Not Supported 00:08:19.000 LBA Status Info Alert Notices: Not Supported 00:08:19.000 EGE Aggregate Log Change Notices: Not Supported 00:08:19.000 Normal NVM Subsystem Shutdown event: Not Supported 00:08:19.000 Zone Descriptor Change Notices: Not Supported 00:08:19.000 Discovery Log Change Notices: Not Supported 00:08:19.000 Controller Attributes 00:08:19.000 128-bit Host Identifier: Not Supported 00:08:19.000 Non-Operational Permissive Mode: Not Supported 00:08:19.000 NVM Sets: Not Supported 00:08:19.000 Read Recovery Levels: Not Supported 00:08:19.000 Endurance Groups: Not Supported 00:08:19.000 Predictable Latency Mode: Not Supported 00:08:19.000 Traffic Based Keep Alive: Not Supported 00:08:19.000 Namespace Granularity: Not Supported 00:08:19.000 SQ Associations: Not Supported 00:08:19.000 UUID List: Not Supported 00:08:19.000 Multi-Domain Subsystem: Not Supported 00:08:19.000 Fixed Capacity Management: Not Supported 00:08:19.000 Variable Capacity Management: Not Supported 00:08:19.000 Delete Endurance Group: Not Supported 00:08:19.000 Delete NVM Set: Not Supported 00:08:19.000 Extended LBA Formats Supported: Supported 00:08:19.000 Flexible Data Placement Supported: Not Supported 00:08:19.000 00:08:19.000 Controller Memory Buffer Support 00:08:19.000 ================================ 00:08:19.000 Supported: No 00:08:19.000 00:08:19.000 Persistent Memory Region Support 00:08:19.000 ================================ 00:08:19.000 Supported: No 00:08:19.000 00:08:19.000 Admin Command Set Attributes 00:08:19.000 ============================ 00:08:19.000 Security Send/Receive: Not Supported 00:08:19.000 Format NVM: Supported 00:08:19.000 Firmware Activate/Download: Not Supported 00:08:19.000 Namespace Management: Supported 00:08:19.000 Device Self-Test: Not Supported 00:08:19.000 Directives: Supported 00:08:19.000 NVMe-MI: Not Supported 00:08:19.000 Virtualization Management: Not Supported 00:08:19.000 Doorbell Buffer Config: Supported 00:08:19.000 Get LBA Status Capability: Not Supported 00:08:19.000 Command & Feature Lockdown Capability: Not Supported 00:08:19.000 Abort Command Limit: 4 00:08:19.000 Async Event Request Limit: 4 00:08:19.000 Number of Firmware Slots: N/A 00:08:19.000 Firmware Slot 1 Read-Only: N/A 00:08:19.000 Firmware Activation Without Reset: N/A 00:08:19.000 Multiple Update Detection Support: N/A 00:08:19.000 Firmware Update Granularity: No Information Provided 00:08:19.000 Per-Namespace SMART Log: Yes 00:08:19.000 Asymmetric Namespace Access Log Page: Not Supported 00:08:19.000 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:19.000 Command Effects Log Page: Supported 00:08:19.000 Get Log Page Extended Data: Supported 00:08:19.000 Telemetry Log Pages: Not Supported 00:08:19.000 Persistent Event Log Pages: Not Supported 00:08:19.000 Supported Log Pages Log Page: May Support 00:08:19.000 Commands Supported & Effects Log Page: Not Supported 00:08:19.000 Feature Identifiers & Effects Log Page: May Support 00:08:19.000 NVMe-MI Commands & Effects Log Page: May Support 00:08:19.000 Data Area 4 for Telemetry Log: Not Supported 00:08:19.000 Error Log Page Entries Supported: 1 00:08:19.000 Keep Alive: Not Supported 
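The namespace listings in these dumps annotate each LBA count with a size in GiB; with the 4096-byte data size of the current LBA Format #04, the annotation is an exact power-of-two conversion. A quick illustrative check (the helper name is hypothetical):

    # Capacity in GiB = LBA count * data size of the current LBA format.
    def lbas_to_gib(num_lbas: int, block_size: int = 4096) -> float:
        return num_lbas * block_size / 2**30

    assert lbas_to_gib(1310720) == 5.0  # 5GiB namespace on the 12341 controller
    assert lbas_to_gib(1048576) == 4.0  # 4GiB namespaces on the 12342 controller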
00:08:19.000 00:08:19.000 NVM Command Set Attributes 00:08:19.000 ========================== 00:08:19.000 Submission Queue Entry Size 00:08:19.000 Max: 64 00:08:19.000 Min: 64 00:08:19.000 Completion Queue Entry Size 00:08:19.000 Max: 16 00:08:19.000 Min: 16 00:08:19.000 Number of Namespaces: 256 00:08:19.000 Compare Command: Supported 00:08:19.000 Write Uncorrectable Command: Not Supported 00:08:19.000 Dataset Management Command: Supported 00:08:19.000 Write Zeroes Command: Supported 00:08:19.000 Set Features Save Field: Supported 00:08:19.000 Reservations: Not Supported 00:08:19.000 Timestamp: Supported 00:08:19.000 Copy: Supported 00:08:19.000 Volatile Write Cache: Present 00:08:19.000 Atomic Write Unit (Normal): 1 00:08:19.000 Atomic Write Unit (PFail): 1 00:08:19.000 Atomic Compare & Write Unit: 1 00:08:19.000 Fused Compare & Write: Not Supported 00:08:19.000 Scatter-Gather List 00:08:19.000 SGL Command Set: Supported 00:08:19.000 SGL Keyed: Not Supported 00:08:19.000 SGL Bit Bucket Descriptor: Not Supported 00:08:19.000 SGL Metadata Pointer: Not Supported 00:08:19.000 Oversized SGL: Not Supported 00:08:19.000 SGL Metadata Address: Not Supported 00:08:19.000 SGL Offset: Not Supported 00:08:19.000 Transport SGL Data Block: Not Supported 00:08:19.000 Replay Protected Memory Block: Not Supported 00:08:19.000 00:08:19.000 Firmware Slot Information 00:08:19.000 ========================= 00:08:19.000 Active slot: 1 00:08:19.000 Slot 1 Firmware Revision: 1.0 00:08:19.000 00:08:19.000 00:08:19.000 Commands Supported and Effects 00:08:19.000 ============================== 00:08:19.000 Admin Commands 00:08:19.000 -------------- 00:08:19.000 Delete I/O Submission Queue (00h): Supported 00:08:19.000 Create I/O Submission Queue (01h): Supported 00:08:19.000 Get Log Page (02h): Supported 00:08:19.000 Delete I/O Completion Queue (04h): Supported 00:08:19.000 Create I/O Completion Queue (05h): Supported 00:08:19.000 Identify (06h): Supported 00:08:19.000 Abort (08h): Supported 00:08:19.000 Set Features (09h): Supported 00:08:19.000 Get Features (0Ah): Supported 00:08:19.000 Asynchronous Event Request (0Ch): Supported 00:08:19.000 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:19.000 Directive Send (19h): Supported 00:08:19.000 Directive Receive (1Ah): Supported 00:08:19.000 Virtualization Management (1Ch): Supported 00:08:19.000 Doorbell Buffer Config (7Ch): Supported 00:08:19.000 Format NVM (80h): Supported LBA-Change 00:08:19.000 I/O Commands 00:08:19.000 ------------ 00:08:19.000 Flush (00h): Supported LBA-Change 00:08:19.000 Write (01h): Supported LBA-Change 00:08:19.000 Read (02h): Supported 00:08:19.000 Compare (05h): Supported 00:08:19.000 Write Zeroes (08h): Supported LBA-Change 00:08:19.000 Dataset Management (09h): Supported LBA-Change 00:08:19.000 Unknown (0Ch): Supported 00:08:19.000 Unknown (12h): Supported 00:08:19.000 Copy (19h): Supported LBA-Change 00:08:19.000 Unknown (1Dh): Supported LBA-Change 00:08:19.000 00:08:19.000 Error Log 00:08:19.000 ========= 00:08:19.000 00:08:19.000 Arbitration 00:08:19.000 =========== 00:08:19.000 Arbitration Burst: no limit 00:08:19.000 00:08:19.000 Power Management 00:08:19.000 ================ 00:08:19.000 Number of Power States: 1 00:08:19.000 Current Power State: Power State #0 00:08:19.000 Power State #0: 00:08:19.000 Max Power: 25.00 W 00:08:19.000 Non-Operational State: Operational 00:08:19.000 Entry Latency: 16 microseconds 00:08:19.000 Exit Latency: 4 microseconds 00:08:19.000 Relative Read Throughput: 0 00:08:19.000 Relative 
Read Latency: 0 00:08:19.000 Relative Write Throughput: 0 00:08:19.000 Relative Write Latency: 0 00:08:19.001 Idle Power: Not Reported 00:08:19.001 Active Power: Not Reported 00:08:19.001 Non-Operational Permissive Mode: Not Supported 00:08:19.001 00:08:19.001 Health Information 00:08:19.001 ================== 00:08:19.001 Critical Warnings: 00:08:19.001 Available Spare Space: OK 00:08:19.001 Temperature: OK 00:08:19.001 Device Reliability: OK 00:08:19.001 Read Only: No 00:08:19.001 Volatile Memory Backup: OK 00:08:19.001 Current Temperature: 323 Kelvin (50 Celsius) 00:08:19.001 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:19.001 Available Spare: 0% 00:08:19.001 Available Spare Threshold: 0% 00:08:19.001 Life Percentage Used: 0% 00:08:19.001 Data Units Read: 2225 00:08:19.001 Data Units Written: 2012 00:08:19.001 Host Read Commands: 120640 00:08:19.001 Host Write Commands: 118909 00:08:19.001 Controller Busy Time: 0 minutes 00:08:19.001 Power Cycles: 0 00:08:19.001 Power On Hours: 0 hours 00:08:19.001 Unsafe Shutdowns: 0 00:08:19.001 Unrecoverable Media Errors: 0 00:08:19.001 Lifetime Error Log Entries: 0 00:08:19.001 Warning Temperature Time: 0 minutes 00:08:19.001 Critical Temperature Time: 0 minutes 00:08:19.001 00:08:19.001 Number of Queues 00:08:19.001 ================ 00:08:19.001 Number of I/O Submission Queues: 64 00:08:19.001 Number of I/O Completion Queues: 64 00:08:19.001 00:08:19.001 ZNS Specific Controller Data 00:08:19.001 ============================ 00:08:19.001 Zone Append Size Limit: 0 00:08:19.001 00:08:19.001 00:08:19.001 Active Namespaces 00:08:19.001 ================= 00:08:19.001 Namespace ID:1 00:08:19.001 Error Recovery Timeout: Unlimited 00:08:19.001 Command Set Identifier: NVM (00h) 00:08:19.001 Deallocate: Supported 00:08:19.001 Deallocated/Unwritten Error: Supported 00:08:19.001 Deallocated Read Value: All 0x00 00:08:19.001 Deallocate in Write Zeroes: Not Supported 00:08:19.001 Deallocated Guard Field: 0xFFFF 00:08:19.001 Flush: Supported 00:08:19.001 Reservation: Not Supported 00:08:19.001 Namespace Sharing Capabilities: Private 00:08:19.001 Size (in LBAs): 1048576 (4GiB) 00:08:19.001 Capacity (in LBAs): 1048576 (4GiB) 00:08:19.001 Utilization (in LBAs): 1048576 (4GiB) 00:08:19.001 Thin Provisioning: Not Supported 00:08:19.001 Per-NS Atomic Units: No 00:08:19.001 Maximum Single Source Range Length: 128 00:08:19.001 Maximum Copy Length: 128 00:08:19.001 Maximum Source Range Count: 128 00:08:19.001 NGUID/EUI64 Never Reused: No 00:08:19.001 Namespace Write Protected: No 00:08:19.001 Number of LBA Formats: 8 00:08:19.001 Current LBA Format: LBA Format #04 00:08:19.001 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:19.001 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:19.001 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:19.001 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:19.001 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:19.001 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:19.001 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:19.001 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:19.001 00:08:19.001 NVM Specific Namespace Data 00:08:19.001 =========================== 00:08:19.001 Logical Block Storage Tag Mask: 0 00:08:19.001 Protection Information Capabilities: 00:08:19.001 16b Guard Protection Information Storage Tag Support: No 00:08:19.001 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:19.001 Storage Tag Check Read Support: No 00:08:19.001 Extended 
LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.001 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.001 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.001 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.001 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.001 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.001 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.001 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.001 Namespace ID:2 00:08:19.001 Error Recovery Timeout: Unlimited 00:08:19.001 Command Set Identifier: NVM (00h) 00:08:19.001 Deallocate: Supported 00:08:19.001 Deallocated/Unwritten Error: Supported 00:08:19.001 Deallocated Read Value: All 0x00 00:08:19.001 Deallocate in Write Zeroes: Not Supported 00:08:19.001 Deallocated Guard Field: 0xFFFF 00:08:19.001 Flush: Supported 00:08:19.001 Reservation: Not Supported 00:08:19.001 Namespace Sharing Capabilities: Private 00:08:19.001 Size (in LBAs): 1048576 (4GiB) 00:08:19.001 Capacity (in LBAs): 1048576 (4GiB) 00:08:19.001 Utilization (in LBAs): 1048576 (4GiB) 00:08:19.001 Thin Provisioning: Not Supported 00:08:19.001 Per-NS Atomic Units: No 00:08:19.001 Maximum Single Source Range Length: 128 00:08:19.001 Maximum Copy Length: 128 00:08:19.001 Maximum Source Range Count: 128 00:08:19.001 NGUID/EUI64 Never Reused: No 00:08:19.001 Namespace Write Protected: No 00:08:19.001 Number of LBA Formats: 8 00:08:19.001 Current LBA Format: LBA Format #04 00:08:19.001 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:19.001 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:19.001 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:19.001 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:19.001 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:19.001 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:19.001 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:19.001 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:19.001 00:08:19.001 NVM Specific Namespace Data 00:08:19.001 =========================== 00:08:19.001 Logical Block Storage Tag Mask: 0 00:08:19.001 Protection Information Capabilities: 00:08:19.001 16b Guard Protection Information Storage Tag Support: No 00:08:19.001 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:19.001 Storage Tag Check Read Support: No 00:08:19.001 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.001 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.001 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.001 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.001 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.001 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.001 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.001 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 
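Every namespace here advertises the same eight LBA formats. For the extended LBA formats listed above, metadata is carried inline with the data rather than in a separate buffer, so the per-block transfer size is the data size plus the metadata size. A short sketch reproducing that arithmetic (the list literal simply mirrors the 'LBA Format #NN' lines above):

    # (data size, metadata size) pairs from the LBA format table above.
    LBA_FORMATS = [(512, 0), (512, 8), (512, 16), (512, 64),
                   (4096, 0), (4096, 8), (4096, 16), (4096, 64)]

    for idx, (data, md) in enumerate(LBA_FORMATS):
        # Extended LBA: metadata is interleaved, so each block occupies data + md bytes.
        print(f"LBA Format #{idx:02d}: extended LBA size {data + md} bytes")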
00:08:19.001 Namespace ID:3 00:08:19.001 Error Recovery Timeout: Unlimited 00:08:19.001 Command Set Identifier: NVM (00h) 00:08:19.001 Deallocate: Supported 00:08:19.001 Deallocated/Unwritten Error: Supported 00:08:19.001 Deallocated Read Value: All 0x00 00:08:19.001 Deallocate in Write Zeroes: Not Supported 00:08:19.001 Deallocated Guard Field: 0xFFFF 00:08:19.001 Flush: Supported 00:08:19.001 Reservation: Not Supported 00:08:19.001 Namespace Sharing Capabilities: Private 00:08:19.001 Size (in LBAs): 1048576 (4GiB) 00:08:19.001 Capacity (in LBAs): 1048576 (4GiB) 00:08:19.001 Utilization (in LBAs): 1048576 (4GiB) 00:08:19.001 Thin Provisioning: Not Supported 00:08:19.001 Per-NS Atomic Units: No 00:08:19.001 Maximum Single Source Range Length: 128 00:08:19.001 Maximum Copy Length: 128 00:08:19.001 Maximum Source Range Count: 128 00:08:19.001 NGUID/EUI64 Never Reused: No 00:08:19.001 Namespace Write Protected: No 00:08:19.001 Number of LBA Formats: 8 00:08:19.001 Current LBA Format: LBA Format #04 00:08:19.001 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:19.001 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:19.001 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:19.001 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:19.001 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:19.001 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:19.001 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:19.001 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:19.001 00:08:19.001 NVM Specific Namespace Data 00:08:19.001 =========================== 00:08:19.001 Logical Block Storage Tag Mask: 0 00:08:19.001 Protection Information Capabilities: 00:08:19.001 16b Guard Protection Information Storage Tag Support: No 00:08:19.001 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:19.263 Storage Tag Check Read Support: No 00:08:19.263 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.263 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.263 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.263 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.263 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.263 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.263 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.263 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.263 03:17:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:19.263 03:17:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:19.263 ===================================================== 00:08:19.263 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:19.263 ===================================================== 00:08:19.263 Controller Capabilities/Features 00:08:19.263 ================================ 00:08:19.263 Vendor ID: 1b36 00:08:19.263 Subsystem Vendor ID: 1af4 00:08:19.263 Serial Number: 12343 00:08:19.263 Model Number: QEMU NVMe Ctrl 00:08:19.263 Firmware Version: 8.0.0 00:08:19.263 Recommended Arb Burst: 6 00:08:19.263 IEEE OUI Identifier: 00 54 52 00:08:19.263 
Multi-path I/O 00:08:19.263 May have multiple subsystem ports: No 00:08:19.263 May have multiple controllers: Yes 00:08:19.263 Associated with SR-IOV VF: No 00:08:19.263 Max Data Transfer Size: 524288 00:08:19.263 Max Number of Namespaces: 256 00:08:19.263 Max Number of I/O Queues: 64 00:08:19.263 NVMe Specification Version (VS): 1.4 00:08:19.263 NVMe Specification Version (Identify): 1.4 00:08:19.263 Maximum Queue Entries: 2048 00:08:19.263 Contiguous Queues Required: Yes 00:08:19.263 Arbitration Mechanisms Supported 00:08:19.264 Weighted Round Robin: Not Supported 00:08:19.264 Vendor Specific: Not Supported 00:08:19.264 Reset Timeout: 7500 ms 00:08:19.264 Doorbell Stride: 4 bytes 00:08:19.264 NVM Subsystem Reset: Not Supported 00:08:19.264 Command Sets Supported 00:08:19.264 NVM Command Set: Supported 00:08:19.264 Boot Partition: Not Supported 00:08:19.264 Memory Page Size Minimum: 4096 bytes 00:08:19.264 Memory Page Size Maximum: 65536 bytes 00:08:19.264 Persistent Memory Region: Not Supported 00:08:19.264 Optional Asynchronous Events Supported 00:08:19.264 Namespace Attribute Notices: Supported 00:08:19.264 Firmware Activation Notices: Not Supported 00:08:19.264 ANA Change Notices: Not Supported 00:08:19.264 PLE Aggregate Log Change Notices: Not Supported 00:08:19.264 LBA Status Info Alert Notices: Not Supported 00:08:19.264 EGE Aggregate Log Change Notices: Not Supported 00:08:19.264 Normal NVM Subsystem Shutdown event: Not Supported 00:08:19.264 Zone Descriptor Change Notices: Not Supported 00:08:19.264 Discovery Log Change Notices: Not Supported 00:08:19.264 Controller Attributes 00:08:19.264 128-bit Host Identifier: Not Supported 00:08:19.264 Non-Operational Permissive Mode: Not Supported 00:08:19.264 NVM Sets: Not Supported 00:08:19.264 Read Recovery Levels: Not Supported 00:08:19.264 Endurance Groups: Supported 00:08:19.264 Predictable Latency Mode: Not Supported 00:08:19.264 Traffic Based Keep Alive: Not Supported 00:08:19.264 Namespace Granularity: Not Supported 00:08:19.264 SQ Associations: Not Supported 00:08:19.264 UUID List: Not Supported 00:08:19.264 Multi-Domain Subsystem: Not Supported 00:08:19.264 Fixed Capacity Management: Not Supported 00:08:19.264 Variable Capacity Management: Not Supported 00:08:19.264 Delete Endurance Group: Not Supported 00:08:19.264 Delete NVM Set: Not Supported 00:08:19.264 Extended LBA Formats Supported: Supported 00:08:19.264 Flexible Data Placement Supported: Supported 00:08:19.264 00:08:19.264 Controller Memory Buffer Support 00:08:19.264 ================================ 00:08:19.264 Supported: No 00:08:19.264 00:08:19.264 Persistent Memory Region Support 00:08:19.264 ================================ 00:08:19.264 Supported: No 00:08:19.264 00:08:19.264 Admin Command Set Attributes 00:08:19.264 ============================ 00:08:19.264 Security Send/Receive: Not Supported 00:08:19.264 Format NVM: Supported 00:08:19.264 Firmware Activate/Download: Not Supported 00:08:19.264 Namespace Management: Supported 00:08:19.264 Device Self-Test: Not Supported 00:08:19.264 Directives: Supported 00:08:19.264 NVMe-MI: Not Supported 00:08:19.264 Virtualization Management: Not Supported 00:08:19.264 Doorbell Buffer Config: Supported 00:08:19.264 Get LBA Status Capability: Not Supported 00:08:19.264 Command & Feature Lockdown Capability: Not Supported 00:08:19.264 Abort Command Limit: 4 00:08:19.264 Async Event Request Limit: 4 00:08:19.264 Number of Firmware Slots: N/A 00:08:19.264 Firmware Slot 1 Read-Only: N/A 00:08:19.264 Firmware Activation Without 
Reset: N/A 00:08:19.264 Multiple Update Detection Support: N/A 00:08:19.264 Firmware Update Granularity: No Information Provided 00:08:19.264 Per-Namespace SMART Log: Yes 00:08:19.264 Asymmetric Namespace Access Log Page: Not Supported 00:08:19.264 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:19.264 Command Effects Log Page: Supported 00:08:19.264 Get Log Page Extended Data: Supported 00:08:19.264 Telemetry Log Pages: Not Supported 00:08:19.264 Persistent Event Log Pages: Not Supported 00:08:19.264 Supported Log Pages Log Page: May Support 00:08:19.264 Commands Supported & Effects Log Page: Not Supported 00:08:19.264 Feature Identifiers & Effects Log Page: May Support 00:08:19.264 NVMe-MI Commands & Effects Log Page: May Support 00:08:19.264 Data Area 4 for Telemetry Log: Not Supported 00:08:19.264 Error Log Page Entries Supported: 1 00:08:19.264 Keep Alive: Not Supported 00:08:19.264 00:08:19.264 NVM Command Set Attributes 00:08:19.264 ========================== 00:08:19.264 Submission Queue Entry Size 00:08:19.264 Max: 64 00:08:19.264 Min: 64 00:08:19.264 Completion Queue Entry Size 00:08:19.264 Max: 16 00:08:19.264 Min: 16 00:08:19.264 Number of Namespaces: 256 00:08:19.264 Compare Command: Supported 00:08:19.264 Write Uncorrectable Command: Not Supported 00:08:19.264 Dataset Management Command: Supported 00:08:19.264 Write Zeroes Command: Supported 00:08:19.264 Set Features Save Field: Supported 00:08:19.264 Reservations: Not Supported 00:08:19.264 Timestamp: Supported 00:08:19.264 Copy: Supported 00:08:19.264 Volatile Write Cache: Present 00:08:19.264 Atomic Write Unit (Normal): 1 00:08:19.264 Atomic Write Unit (PFail): 1 00:08:19.264 Atomic Compare & Write Unit: 1 00:08:19.264 Fused Compare & Write: Not Supported 00:08:19.264 Scatter-Gather List 00:08:19.264 SGL Command Set: Supported 00:08:19.264 SGL Keyed: Not Supported 00:08:19.264 SGL Bit Bucket Descriptor: Not Supported 00:08:19.264 SGL Metadata Pointer: Not Supported 00:08:19.264 Oversized SGL: Not Supported 00:08:19.264 SGL Metadata Address: Not Supported 00:08:19.264 SGL Offset: Not Supported 00:08:19.264 Transport SGL Data Block: Not Supported 00:08:19.264 Replay Protected Memory Block: Not Supported 00:08:19.264 00:08:19.264 Firmware Slot Information 00:08:19.264 ========================= 00:08:19.264 Active slot: 1 00:08:19.264 Slot 1 Firmware Revision: 1.0 00:08:19.264 00:08:19.264 00:08:19.264 Commands Supported and Effects 00:08:19.264 ============================== 00:08:19.264 Admin Commands 00:08:19.264 -------------- 00:08:19.264 Delete I/O Submission Queue (00h): Supported 00:08:19.264 Create I/O Submission Queue (01h): Supported 00:08:19.264 Get Log Page (02h): Supported 00:08:19.264 Delete I/O Completion Queue (04h): Supported 00:08:19.264 Create I/O Completion Queue (05h): Supported 00:08:19.264 Identify (06h): Supported 00:08:19.264 Abort (08h): Supported 00:08:19.264 Set Features (09h): Supported 00:08:19.264 Get Features (0Ah): Supported 00:08:19.264 Asynchronous Event Request (0Ch): Supported 00:08:19.264 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:19.264 Directive Send (19h): Supported 00:08:19.264 Directive Receive (1Ah): Supported 00:08:19.264 Virtualization Management (1Ch): Supported 00:08:19.264 Doorbell Buffer Config (7Ch): Supported 00:08:19.264 Format NVM (80h): Supported LBA-Change 00:08:19.264 I/O Commands 00:08:19.264 ------------ 00:08:19.264 Flush (00h): Supported LBA-Change 00:08:19.264 Write (01h): Supported LBA-Change 00:08:19.264 Read (02h): Supported 
00:08:19.264 Compare (05h): Supported 00:08:19.264 Write Zeroes (08h): Supported LBA-Change 00:08:19.264 Dataset Management (09h): Supported LBA-Change 00:08:19.264 Unknown (0Ch): Supported 00:08:19.264 Unknown (12h): Supported 00:08:19.264 Copy (19h): Supported LBA-Change 00:08:19.264 Unknown (1Dh): Supported LBA-Change 00:08:19.264 00:08:19.264 Error Log 00:08:19.264 ========= 00:08:19.264 00:08:19.264 Arbitration 00:08:19.264 =========== 00:08:19.264 Arbitration Burst: no limit 00:08:19.264 00:08:19.264 Power Management 00:08:19.264 ================ 00:08:19.264 Number of Power States: 1 00:08:19.264 Current Power State: Power State #0 00:08:19.264 Power State #0: 00:08:19.264 Max Power: 25.00 W 00:08:19.264 Non-Operational State: Operational 00:08:19.264 Entry Latency: 16 microseconds 00:08:19.264 Exit Latency: 4 microseconds 00:08:19.264 Relative Read Throughput: 0 00:08:19.264 Relative Read Latency: 0 00:08:19.264 Relative Write Throughput: 0 00:08:19.264 Relative Write Latency: 0 00:08:19.264 Idle Power: Not Reported 00:08:19.264 Active Power: Not Reported 00:08:19.264 Non-Operational Permissive Mode: Not Supported 00:08:19.264 00:08:19.264 Health Information 00:08:19.264 ================== 00:08:19.264 Critical Warnings: 00:08:19.264 Available Spare Space: OK 00:08:19.264 Temperature: OK 00:08:19.264 Device Reliability: OK 00:08:19.264 Read Only: No 00:08:19.264 Volatile Memory Backup: OK 00:08:19.264 Current Temperature: 323 Kelvin (50 Celsius) 00:08:19.264 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:19.264 Available Spare: 0% 00:08:19.264 Available Spare Threshold: 0% 00:08:19.264 Life Percentage Used: 0% 00:08:19.264 Data Units Read: 802 00:08:19.264 Data Units Written: 731 00:08:19.265 Host Read Commands: 40720 00:08:19.265 Host Write Commands: 40143 00:08:19.265 Controller Busy Time: 0 minutes 00:08:19.265 Power Cycles: 0 00:08:19.265 Power On Hours: 0 hours 00:08:19.265 Unsafe Shutdowns: 0 00:08:19.265 Unrecoverable Media Errors: 0 00:08:19.265 Lifetime Error Log Entries: 0 00:08:19.265 Warning Temperature Time: 0 minutes 00:08:19.265 Critical Temperature Time: 0 minutes 00:08:19.265 00:08:19.265 Number of Queues 00:08:19.265 ================ 00:08:19.265 Number of I/O Submission Queues: 64 00:08:19.265 Number of I/O Completion Queues: 64 00:08:19.265 00:08:19.265 ZNS Specific Controller Data 00:08:19.265 ============================ 00:08:19.265 Zone Append Size Limit: 0 00:08:19.265 00:08:19.265 00:08:19.265 Active Namespaces 00:08:19.265 ================= 00:08:19.265 Namespace ID:1 00:08:19.265 Error Recovery Timeout: Unlimited 00:08:19.265 Command Set Identifier: NVM (00h) 00:08:19.265 Deallocate: Supported 00:08:19.265 Deallocated/Unwritten Error: Supported 00:08:19.265 Deallocated Read Value: All 0x00 00:08:19.265 Deallocate in Write Zeroes: Not Supported 00:08:19.265 Deallocated Guard Field: 0xFFFF 00:08:19.265 Flush: Supported 00:08:19.265 Reservation: Not Supported 00:08:19.265 Namespace Sharing Capabilities: Multiple Controllers 00:08:19.265 Size (in LBAs): 262144 (1GiB) 00:08:19.265 Capacity (in LBAs): 262144 (1GiB) 00:08:19.265 Utilization (in LBAs): 262144 (1GiB) 00:08:19.265 Thin Provisioning: Not Supported 00:08:19.265 Per-NS Atomic Units: No 00:08:19.265 Maximum Single Source Range Length: 128 00:08:19.265 Maximum Copy Length: 128 00:08:19.265 Maximum Source Range Count: 128 00:08:19.265 NGUID/EUI64 Never Reused: No 00:08:19.265 Namespace Write Protected: No 00:08:19.265 Endurance group ID: 1 00:08:19.265 Number of LBA Formats: 8 00:08:19.265 Current 
LBA Format: LBA Format #04 00:08:19.265 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:19.265 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:19.265 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:19.265 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:19.265 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:19.265 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:19.265 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:19.265 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:19.265 00:08:19.265 Get Feature FDP: 00:08:19.265 ================ 00:08:19.265 Enabled: Yes 00:08:19.265 FDP configuration index: 0 00:08:19.265 00:08:19.265 FDP configurations log page 00:08:19.265 =========================== 00:08:19.265 Number of FDP configurations: 1 00:08:19.265 Version: 0 00:08:19.265 Size: 112 00:08:19.265 FDP Configuration Descriptor: 0 00:08:19.265 Descriptor Size: 96 00:08:19.265 Reclaim Group Identifier format: 2 00:08:19.265 FDP Volatile Write Cache: Not Present 00:08:19.265 FDP Configuration: Valid 00:08:19.265 Vendor Specific Size: 0 00:08:19.265 Number of Reclaim Groups: 2 00:08:19.265 Number of Reclaim Unit Handles: 8 00:08:19.265 Max Placement Identifiers: 128 00:08:19.265 Number of Namespaces Supported: 256 00:08:19.265 Reclaim Unit Nominal Size: 6000000 bytes 00:08:19.265 Estimated Reclaim Unit Time Limit: Not Reported 00:08:19.265 RUH Desc #000: RUH Type: Initially Isolated 00:08:19.265 RUH Desc #001: RUH Type: Initially Isolated 00:08:19.265 RUH Desc #002: RUH Type: Initially Isolated 00:08:19.265 RUH Desc #003: RUH Type: Initially Isolated 00:08:19.265 RUH Desc #004: RUH Type: Initially Isolated 00:08:19.265 RUH Desc #005: RUH Type: Initially Isolated 00:08:19.265 RUH Desc #006: RUH Type: Initially Isolated 00:08:19.265 RUH Desc #007: RUH Type: Initially Isolated 00:08:19.265 00:08:19.265 FDP reclaim unit handle usage log page 00:08:19.265 ====================================== 00:08:19.265 Number of Reclaim Unit Handles: 8 00:08:19.265 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:19.265 RUH Usage Desc #001: RUH Attributes: Unused 00:08:19.265 RUH Usage Desc #002: RUH Attributes: Unused 00:08:19.265 RUH Usage Desc #003: RUH Attributes: Unused 00:08:19.265 RUH Usage Desc #004: RUH Attributes: Unused 00:08:19.265 RUH Usage Desc #005: RUH Attributes: Unused 00:08:19.265 RUH Usage Desc #006: RUH Attributes: Unused 00:08:19.265 RUH Usage Desc #007: RUH Attributes: Unused 00:08:19.265 00:08:19.265 FDP statistics log page 00:08:19.265 ======================= 00:08:19.265 Host bytes with metadata written: 465608704 00:08:19.265 Media bytes with metadata written: 465661952 00:08:19.265 Media bytes erased: 0 00:08:19.265 00:08:19.265 FDP events log page 00:08:19.265 =================== 00:08:19.265 Number of FDP events: 0 00:08:19.265 00:08:19.265 NVM Specific Namespace Data 00:08:19.265 =========================== 00:08:19.265 Logical Block Storage Tag Mask: 0 00:08:19.265 Protection Information Capabilities: 00:08:19.265 16b Guard Protection Information Storage Tag Support: No 00:08:19.265 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:19.265 Storage Tag Check Read Support: No 00:08:19.265 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.265 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.265 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 
00:08:19.265 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.265 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.265 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.265 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.265 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.265 ************************************ 00:08:19.265 END TEST nvme_identify 00:08:19.265 ************************************ 00:08:19.265 00:08:19.265 real 0m1.254s 00:08:19.265 user 0m0.455s 00:08:19.265 sys 0m0.567s 00:08:19.265 03:17:06 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.265 03:17:06 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:19.554 03:17:06 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:19.554 03:17:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.554 03:17:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.554 03:17:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:19.554 ************************************ 00:08:19.554 START TEST nvme_perf 00:08:19.554 ************************************ 00:08:19.554 03:17:06 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:19.554 03:17:06 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:20.967 Initializing NVMe Controllers 00:08:20.967 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:20.967 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:20.967 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:20.967 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:20.967 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:20.967 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:20.967 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:20.967 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:20.967 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:20.967 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:20.967 Initialization complete. Launching workers. 
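spdk_nvme_perf is invoked above with -q 128 (queue depth 128), -w read, -o 12288 (12 KiB I/Os) and -t 1 (one second), so the MiB/s column in the results below is simply the IOPS column scaled by the I/O size. A minimal sketch of that relation (the helper name is illustrative):

    # MiB/s = IOPS * I/O size in bytes / 2**20; -o 12288 comes from the command above.
    def iops_to_mibps(iops: float, io_size_bytes: int = 12288) -> float:
        return iops * io_size_bytes / 2**20

    print(round(iops_to_mibps(7468.89), 2))   # 87.53 -> matches the per-device rows
    print(round(iops_to_mibps(44877.16), 2))  # 525.9 -> matches the Total row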
00:08:20.967 ======================================================== 00:08:20.967 Latency(us) 00:08:20.967 Device Information : IOPS MiB/s Average min max 00:08:20.967 PCIE (0000:00:13.0) NSID 1 from core 0: 7468.89 87.53 17146.81 10400.79 39753.49 00:08:20.967 PCIE (0000:00:10.0) NSID 1 from core 0: 7468.89 87.53 17131.79 9171.66 39008.67 00:08:20.967 PCIE (0000:00:11.0) NSID 1 from core 0: 7468.89 87.53 17112.74 8250.87 38391.45 00:08:20.967 PCIE (0000:00:12.0) NSID 1 from core 0: 7468.89 87.53 17089.52 7230.65 38929.27 00:08:20.967 PCIE (0000:00:12.0) NSID 2 from core 0: 7468.89 87.53 17062.06 6274.31 37858.21 00:08:20.967 PCIE (0000:00:12.0) NSID 3 from core 0: 7532.72 88.27 16891.18 5371.35 30098.93 00:08:20.967 ======================================================== 00:08:20.967 Total : 44877.16 525.90 17072.09 5371.35 39753.49 00:08:20.967 00:08:20.967 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:20.967 ================================================================================= 00:08:20.967 1.00000% : 13712.148us 00:08:20.967 10.00000% : 14821.218us 00:08:20.967 25.00000% : 15526.991us 00:08:20.967 50.00000% : 16636.062us 00:08:20.967 75.00000% : 18047.606us 00:08:20.967 90.00000% : 20164.923us 00:08:20.967 95.00000% : 20971.520us 00:08:20.968 98.00000% : 21878.942us 00:08:20.968 99.00000% : 30852.332us 00:08:20.968 99.50000% : 38918.302us 00:08:20.968 99.90000% : 39523.249us 00:08:20.968 99.99000% : 39926.548us 00:08:20.968 99.99900% : 39926.548us 00:08:20.968 99.99990% : 39926.548us 00:08:20.968 99.99999% : 39926.548us 00:08:20.968 00:08:20.968 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:20.968 ================================================================================= 00:08:20.968 1.00000% : 13712.148us 00:08:20.968 10.00000% : 14821.218us 00:08:20.968 25.00000% : 15426.166us 00:08:20.968 50.00000% : 16636.062us 00:08:20.968 75.00000% : 18148.431us 00:08:20.968 90.00000% : 20064.098us 00:08:20.968 95.00000% : 20870.695us 00:08:20.968 98.00000% : 22080.591us 00:08:20.968 99.00000% : 30650.683us 00:08:20.968 99.50000% : 38313.354us 00:08:20.968 99.90000% : 38918.302us 00:08:20.968 99.99000% : 39119.951us 00:08:20.968 99.99900% : 39119.951us 00:08:20.968 99.99990% : 39119.951us 00:08:20.968 99.99999% : 39119.951us 00:08:20.968 00:08:20.968 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:20.968 ================================================================================= 00:08:20.968 1.00000% : 13913.797us 00:08:20.968 10.00000% : 14821.218us 00:08:20.968 25.00000% : 15426.166us 00:08:20.968 50.00000% : 16535.237us 00:08:20.968 75.00000% : 18148.431us 00:08:20.968 90.00000% : 20064.098us 00:08:20.968 95.00000% : 20971.520us 00:08:20.968 98.00000% : 22181.415us 00:08:20.968 99.00000% : 30045.735us 00:08:20.968 99.50000% : 37708.406us 00:08:20.968 99.90000% : 38313.354us 00:08:20.968 99.99000% : 38515.003us 00:08:20.968 99.99900% : 38515.003us 00:08:20.968 99.99990% : 38515.003us 00:08:20.968 99.99999% : 38515.003us 00:08:20.968 00:08:20.968 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:20.968 ================================================================================= 00:08:20.968 1.00000% : 13712.148us 00:08:20.968 10.00000% : 14720.394us 00:08:20.968 25.00000% : 15325.342us 00:08:20.968 50.00000% : 16636.062us 00:08:20.968 75.00000% : 18350.080us 00:08:20.968 90.00000% : 20164.923us 00:08:20.968 95.00000% : 20971.520us 00:08:20.968 98.00000% : 22080.591us 
00:08:20.968 99.00000% : 30650.683us 00:08:20.968 99.50000% : 38313.354us 00:08:20.968 99.90000% : 38918.302us 00:08:20.968 99.99000% : 39119.951us 00:08:20.968 99.99900% : 39119.951us 00:08:20.968 99.99990% : 39119.951us 00:08:20.968 99.99999% : 39119.951us 00:08:20.968 00:08:20.968 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:20.968 ================================================================================= 00:08:20.968 1.00000% : 12502.252us 00:08:20.968 10.00000% : 14720.394us 00:08:20.968 25.00000% : 15426.166us 00:08:20.968 50.00000% : 16736.886us 00:08:20.968 75.00000% : 18047.606us 00:08:20.968 90.00000% : 20164.923us 00:08:20.968 95.00000% : 20870.695us 00:08:20.968 98.00000% : 22080.591us 00:08:20.968 99.00000% : 29844.086us 00:08:20.968 99.50000% : 37103.458us 00:08:20.968 99.90000% : 37708.406us 00:08:20.968 99.99000% : 37910.055us 00:08:20.968 99.99900% : 37910.055us 00:08:20.968 99.99990% : 37910.055us 00:08:20.968 99.99999% : 37910.055us 00:08:20.968 00:08:20.968 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:20.968 ================================================================================= 00:08:20.968 1.00000% : 11544.418us 00:08:20.968 10.00000% : 14720.394us 00:08:20.968 25.00000% : 15426.166us 00:08:20.968 50.00000% : 16636.062us 00:08:20.968 75.00000% : 18047.606us 00:08:20.968 90.00000% : 20164.923us 00:08:20.968 95.00000% : 20870.695us 00:08:20.968 98.00000% : 21475.643us 00:08:20.968 99.00000% : 22080.591us 00:08:20.968 99.50000% : 29440.788us 00:08:20.968 99.90000% : 30045.735us 00:08:20.968 99.99000% : 30247.385us 00:08:20.968 99.99900% : 30247.385us 00:08:20.968 99.99990% : 30247.385us 00:08:20.968 99.99999% : 30247.385us 00:08:20.968 00:08:20.968 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:20.968 ============================================================================== 00:08:20.968 Range in us Cumulative IO count 00:08:20.968 10384.935 - 10435.348: 0.0401% ( 3) 00:08:20.968 10435.348 - 10485.760: 0.0801% ( 3) 00:08:20.968 10485.760 - 10536.172: 0.0935% ( 1) 00:08:20.968 10536.172 - 10586.585: 0.1202% ( 2) 00:08:20.968 10586.585 - 10636.997: 0.1469% ( 2) 00:08:20.968 10636.997 - 10687.409: 0.1870% ( 3) 00:08:20.968 10687.409 - 10737.822: 0.2137% ( 2) 00:08:20.968 10737.822 - 10788.234: 0.2404% ( 2) 00:08:20.968 10788.234 - 10838.646: 0.2671% ( 2) 00:08:20.968 10838.646 - 10889.058: 0.2938% ( 2) 00:08:20.968 10889.058 - 10939.471: 0.3205% ( 2) 00:08:20.968 10939.471 - 10989.883: 0.3472% ( 2) 00:08:20.968 10989.883 - 11040.295: 0.3873% ( 3) 00:08:20.968 11040.295 - 11090.708: 0.4140% ( 2) 00:08:20.968 11090.708 - 11141.120: 0.4407% ( 2) 00:08:20.968 11141.120 - 11191.532: 0.4674% ( 2) 00:08:20.968 11191.532 - 11241.945: 0.4941% ( 2) 00:08:20.968 11241.945 - 11292.357: 0.5208% ( 2) 00:08:20.968 11292.357 - 11342.769: 0.5475% ( 2) 00:08:20.968 11342.769 - 11393.182: 0.5743% ( 2) 00:08:20.968 11393.182 - 11443.594: 0.6010% ( 2) 00:08:20.968 11443.594 - 11494.006: 0.6277% ( 2) 00:08:20.968 11494.006 - 11544.418: 0.6544% ( 2) 00:08:20.968 11544.418 - 11594.831: 0.6944% ( 3) 00:08:20.968 11594.831 - 11645.243: 0.7212% ( 2) 00:08:20.968 11645.243 - 11695.655: 0.7479% ( 2) 00:08:20.968 11695.655 - 11746.068: 0.7746% ( 2) 00:08:20.968 11746.068 - 11796.480: 0.8013% ( 2) 00:08:20.968 11796.480 - 11846.892: 0.8280% ( 2) 00:08:20.968 11846.892 - 11897.305: 0.8547% ( 2) 00:08:20.968 13409.674 - 13510.498: 0.8681% ( 1) 00:08:20.968 13510.498 - 13611.323: 0.9215% ( 4) 00:08:20.968 13611.323 
- 13712.148: 1.0417% ( 9) 00:08:20.968 13712.148 - 13812.972: 1.1084% ( 5) 00:08:20.968 13812.972 - 13913.797: 1.2286% ( 9) 00:08:20.968 13913.797 - 14014.622: 1.7361% ( 38) 00:08:20.968 14014.622 - 14115.446: 2.6976% ( 72) 00:08:20.968 14115.446 - 14216.271: 3.5256% ( 62) 00:08:20.968 14216.271 - 14317.095: 4.2201% ( 52) 00:08:20.968 14317.095 - 14417.920: 5.3152% ( 82) 00:08:20.968 14417.920 - 14518.745: 6.5972% ( 96) 00:08:20.968 14518.745 - 14619.569: 8.2399% ( 123) 00:08:20.968 14619.569 - 14720.394: 9.9092% ( 125) 00:08:20.968 14720.394 - 14821.218: 11.8189% ( 143) 00:08:20.968 14821.218 - 14922.043: 13.9690% ( 161) 00:08:20.968 14922.043 - 15022.868: 16.4129% ( 183) 00:08:20.968 15022.868 - 15123.692: 18.9370% ( 189) 00:08:20.968 15123.692 - 15224.517: 20.9802% ( 153) 00:08:20.968 15224.517 - 15325.342: 22.9300% ( 146) 00:08:20.968 15325.342 - 15426.166: 24.9466% ( 151) 00:08:20.968 15426.166 - 15526.991: 27.1234% ( 163) 00:08:20.968 15526.991 - 15627.815: 29.3937% ( 170) 00:08:20.968 15627.815 - 15728.640: 31.5037% ( 158) 00:08:20.968 15728.640 - 15829.465: 33.5737% ( 155) 00:08:20.968 15829.465 - 15930.289: 35.7639% ( 164) 00:08:20.968 15930.289 - 16031.114: 37.9274% ( 162) 00:08:20.968 16031.114 - 16131.938: 40.3846% ( 184) 00:08:20.968 16131.938 - 16232.763: 42.6015% ( 166) 00:08:20.968 16232.763 - 16333.588: 44.8050% ( 165) 00:08:20.968 16333.588 - 16434.412: 46.9818% ( 163) 00:08:20.968 16434.412 - 16535.237: 49.3323% ( 176) 00:08:20.968 16535.237 - 16636.062: 51.5091% ( 163) 00:08:20.968 16636.062 - 16736.886: 53.5657% ( 154) 00:08:20.968 16736.886 - 16837.711: 55.5823% ( 151) 00:08:20.968 16837.711 - 16938.535: 57.9594% ( 178) 00:08:20.968 16938.535 - 17039.360: 60.1896% ( 167) 00:08:20.968 17039.360 - 17140.185: 62.0994% ( 143) 00:08:20.968 17140.185 - 17241.009: 63.9824% ( 141) 00:08:20.968 17241.009 - 17341.834: 65.7853% ( 135) 00:08:20.968 17341.834 - 17442.658: 67.4412% ( 124) 00:08:20.968 17442.658 - 17543.483: 69.0705% ( 122) 00:08:20.968 17543.483 - 17644.308: 70.6197% ( 116) 00:08:20.968 17644.308 - 17745.132: 71.9418% ( 99) 00:08:20.968 17745.132 - 17845.957: 73.0903% ( 86) 00:08:20.968 17845.957 - 17946.782: 74.1987% ( 83) 00:08:20.968 17946.782 - 18047.606: 75.1068% ( 68) 00:08:20.968 18047.606 - 18148.431: 75.8146% ( 53) 00:08:20.968 18148.431 - 18249.255: 76.6026% ( 59) 00:08:20.968 18249.255 - 18350.080: 77.3104% ( 53) 00:08:20.968 18350.080 - 18450.905: 78.1784% ( 65) 00:08:20.968 18450.905 - 18551.729: 78.9797% ( 60) 00:08:20.968 18551.729 - 18652.554: 79.6341% ( 49) 00:08:20.968 18652.554 - 18753.378: 80.3419% ( 53) 00:08:20.968 18753.378 - 18854.203: 81.1165% ( 58) 00:08:20.968 18854.203 - 18955.028: 81.9578% ( 63) 00:08:20.969 18955.028 - 19055.852: 82.6522% ( 52) 00:08:20.969 19055.852 - 19156.677: 83.2933% ( 48) 00:08:20.969 19156.677 - 19257.502: 84.0545% ( 57) 00:08:20.969 19257.502 - 19358.326: 84.8958% ( 63) 00:08:20.969 19358.326 - 19459.151: 85.7505% ( 64) 00:08:20.969 19459.151 - 19559.975: 86.4583% ( 53) 00:08:20.969 19559.975 - 19660.800: 87.1795% ( 54) 00:08:20.969 19660.800 - 19761.625: 87.8606% ( 51) 00:08:20.969 19761.625 - 19862.449: 88.5016% ( 48) 00:08:20.969 19862.449 - 19963.274: 89.1960% ( 52) 00:08:20.969 19963.274 - 20064.098: 89.9172% ( 54) 00:08:20.969 20064.098 - 20164.923: 90.4781% ( 42) 00:08:20.969 20164.923 - 20265.748: 91.0924% ( 46) 00:08:20.969 20265.748 - 20366.572: 91.7334% ( 48) 00:08:20.969 20366.572 - 20467.397: 92.3344% ( 45) 00:08:20.969 20467.397 - 20568.222: 92.9220% ( 44) 00:08:20.969 20568.222 - 20669.046: 
93.4428% ( 39) 00:08:20.969 20669.046 - 20769.871: 94.0438% ( 45) 00:08:20.969 20769.871 - 20870.695: 94.6047% ( 42) 00:08:20.969 20870.695 - 20971.520: 95.2057% ( 45) 00:08:20.969 20971.520 - 21072.345: 95.7933% ( 44) 00:08:20.969 21072.345 - 21173.169: 96.1405% ( 26) 00:08:20.969 21173.169 - 21273.994: 96.5144% ( 28) 00:08:20.969 21273.994 - 21374.818: 96.9151% ( 30) 00:08:20.969 21374.818 - 21475.643: 97.3424% ( 32) 00:08:20.969 21475.643 - 21576.468: 97.6362% ( 22) 00:08:20.969 21576.468 - 21677.292: 97.8499% ( 16) 00:08:20.969 21677.292 - 21778.117: 97.9701% ( 9) 00:08:20.969 21778.117 - 21878.942: 98.0903% ( 9) 00:08:20.969 21878.942 - 21979.766: 98.1437% ( 4) 00:08:20.969 21979.766 - 22080.591: 98.1838% ( 3) 00:08:20.969 22080.591 - 22181.415: 98.2372% ( 4) 00:08:20.969 22181.415 - 22282.240: 98.2772% ( 3) 00:08:20.969 22282.240 - 22383.065: 98.2906% ( 1) 00:08:20.969 29642.437 - 29844.086: 98.3974% ( 8) 00:08:20.969 29844.086 - 30045.735: 98.5443% ( 11) 00:08:20.969 30045.735 - 30247.385: 98.6912% ( 11) 00:08:20.969 30247.385 - 30449.034: 98.8248% ( 10) 00:08:20.969 30449.034 - 30650.683: 98.9450% ( 9) 00:08:20.969 30650.683 - 30852.332: 99.1052% ( 12) 00:08:20.969 30852.332 - 31053.982: 99.1453% ( 3) 00:08:20.969 38111.705 - 38313.354: 99.2121% ( 5) 00:08:20.969 38313.354 - 38515.003: 99.3456% ( 10) 00:08:20.969 38515.003 - 38716.652: 99.4925% ( 11) 00:08:20.969 38716.652 - 38918.302: 99.6394% ( 11) 00:08:20.969 38918.302 - 39119.951: 99.7196% ( 6) 00:08:20.969 39119.951 - 39321.600: 99.8130% ( 7) 00:08:20.969 39321.600 - 39523.249: 99.9065% ( 7) 00:08:20.969 39523.249 - 39724.898: 99.9866% ( 6) 00:08:20.969 39724.898 - 39926.548: 100.0000% ( 1) 00:08:20.969 00:08:20.969 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:20.969 ============================================================================== 00:08:20.969 Range in us Cumulative IO count 00:08:20.969 9124.628 - 9175.040: 0.0134% ( 1) 00:08:20.969 9175.040 - 9225.452: 0.0267% ( 1) 00:08:20.969 9225.452 - 9275.865: 0.0534% ( 2) 00:08:20.969 9275.865 - 9326.277: 0.0801% ( 2) 00:08:20.969 9326.277 - 9376.689: 0.1068% ( 2) 00:08:20.969 9376.689 - 9427.102: 0.1335% ( 2) 00:08:20.969 9427.102 - 9477.514: 0.1469% ( 1) 00:08:20.969 9477.514 - 9527.926: 0.1736% ( 2) 00:08:20.969 9527.926 - 9578.338: 0.1870% ( 1) 00:08:20.969 9578.338 - 9628.751: 0.2003% ( 1) 00:08:20.969 9679.163 - 9729.575: 0.2537% ( 4) 00:08:20.969 9729.575 - 9779.988: 0.2938% ( 3) 00:08:20.969 9779.988 - 9830.400: 0.3205% ( 2) 00:08:20.969 9830.400 - 9880.812: 0.3472% ( 2) 00:08:20.969 9880.812 - 9931.225: 0.3606% ( 1) 00:08:20.969 9931.225 - 9981.637: 0.3873% ( 2) 00:08:20.969 9981.637 - 10032.049: 0.4006% ( 1) 00:08:20.969 10032.049 - 10082.462: 0.4140% ( 1) 00:08:20.969 10082.462 - 10132.874: 0.4407% ( 2) 00:08:20.969 10132.874 - 10183.286: 0.4674% ( 2) 00:08:20.969 10183.286 - 10233.698: 0.4941% ( 2) 00:08:20.969 10233.698 - 10284.111: 0.5208% ( 2) 00:08:20.969 10284.111 - 10334.523: 0.5475% ( 2) 00:08:20.969 10334.523 - 10384.935: 0.5609% ( 1) 00:08:20.969 10384.935 - 10435.348: 0.5876% ( 2) 00:08:20.969 10435.348 - 10485.760: 0.6143% ( 2) 00:08:20.969 10485.760 - 10536.172: 0.6544% ( 3) 00:08:20.969 10586.585 - 10636.997: 0.6944% ( 3) 00:08:20.969 10636.997 - 10687.409: 0.7212% ( 2) 00:08:20.969 10687.409 - 10737.822: 0.7345% ( 1) 00:08:20.969 10788.234 - 10838.646: 0.7612% ( 2) 00:08:20.969 10838.646 - 10889.058: 0.8013% ( 3) 00:08:20.969 10889.058 - 10939.471: 0.8413% ( 3) 00:08:20.969 10939.471 - 10989.883: 0.8547% ( 1) 00:08:20.969 
13409.674 - 13510.498: 0.8681% ( 1) 00:08:20.969 13510.498 - 13611.323: 0.9749% ( 8) 00:08:20.969 13611.323 - 13712.148: 1.1485% ( 13) 00:08:20.969 13712.148 - 13812.972: 1.3622% ( 16) 00:08:20.969 13812.972 - 13913.797: 1.7895% ( 32) 00:08:20.969 13913.797 - 14014.622: 2.1100% ( 24) 00:08:20.969 14014.622 - 14115.446: 2.8312% ( 54) 00:08:20.969 14115.446 - 14216.271: 3.5924% ( 57) 00:08:20.969 14216.271 - 14317.095: 4.5406% ( 71) 00:08:20.969 14317.095 - 14417.920: 5.5823% ( 78) 00:08:20.969 14417.920 - 14518.745: 6.8777% ( 97) 00:08:20.969 14518.745 - 14619.569: 8.6138% ( 130) 00:08:20.969 14619.569 - 14720.394: 9.8825% ( 95) 00:08:20.969 14720.394 - 14821.218: 12.3130% ( 182) 00:08:20.969 14821.218 - 14922.043: 14.2895% ( 148) 00:08:20.969 14922.043 - 15022.868: 16.3729% ( 156) 00:08:20.969 15022.868 - 15123.692: 18.1490% ( 133) 00:08:20.969 15123.692 - 15224.517: 20.5662% ( 181) 00:08:20.969 15224.517 - 15325.342: 22.9834% ( 181) 00:08:20.969 15325.342 - 15426.166: 25.2804% ( 172) 00:08:20.969 15426.166 - 15526.991: 27.4439% ( 162) 00:08:20.969 15526.991 - 15627.815: 29.4471% ( 150) 00:08:20.969 15627.815 - 15728.640: 31.8510% ( 180) 00:08:20.969 15728.640 - 15829.465: 33.7874% ( 145) 00:08:20.969 15829.465 - 15930.289: 35.9642% ( 163) 00:08:20.969 15930.289 - 16031.114: 38.5951% ( 197) 00:08:20.969 16031.114 - 16131.938: 40.6918% ( 157) 00:08:20.969 16131.938 - 16232.763: 42.8152% ( 159) 00:08:20.969 16232.763 - 16333.588: 45.0187% ( 165) 00:08:20.969 16333.588 - 16434.412: 46.7548% ( 130) 00:08:20.969 16434.412 - 16535.237: 48.6245% ( 140) 00:08:20.969 16535.237 - 16636.062: 50.5876% ( 147) 00:08:20.969 16636.062 - 16736.886: 52.7244% ( 160) 00:08:20.969 16736.886 - 16837.711: 54.6875% ( 147) 00:08:20.969 16837.711 - 16938.535: 57.0913% ( 180) 00:08:20.969 16938.535 - 17039.360: 59.2949% ( 165) 00:08:20.969 17039.360 - 17140.185: 61.2179% ( 144) 00:08:20.969 17140.185 - 17241.009: 63.3280% ( 158) 00:08:20.969 17241.009 - 17341.834: 64.8771% ( 116) 00:08:20.969 17341.834 - 17442.658: 66.8269% ( 146) 00:08:20.969 17442.658 - 17543.483: 68.2692% ( 108) 00:08:20.969 17543.483 - 17644.308: 69.9119% ( 123) 00:08:20.969 17644.308 - 17745.132: 71.4610% ( 116) 00:08:20.969 17745.132 - 17845.957: 72.7297% ( 95) 00:08:20.969 17845.957 - 17946.782: 73.9183% ( 89) 00:08:20.969 17946.782 - 18047.606: 74.9599% ( 78) 00:08:20.969 18047.606 - 18148.431: 76.0550% ( 82) 00:08:20.969 18148.431 - 18249.255: 76.7094% ( 49) 00:08:20.969 18249.255 - 18350.080: 77.6709% ( 72) 00:08:20.969 18350.080 - 18450.905: 78.4589% ( 59) 00:08:20.969 18450.905 - 18551.729: 79.0198% ( 42) 00:08:20.969 18551.729 - 18652.554: 79.7943% ( 58) 00:08:20.969 18652.554 - 18753.378: 80.3819% ( 44) 00:08:20.969 18753.378 - 18854.203: 81.0363% ( 49) 00:08:20.969 18854.203 - 18955.028: 81.6239% ( 44) 00:08:20.969 18955.028 - 19055.852: 82.3985% ( 58) 00:08:20.969 19055.852 - 19156.677: 83.1063% ( 53) 00:08:20.969 19156.677 - 19257.502: 83.7473% ( 48) 00:08:20.969 19257.502 - 19358.326: 84.5085% ( 57) 00:08:20.969 19358.326 - 19459.151: 85.4033% ( 67) 00:08:20.969 19459.151 - 19559.975: 86.2847% ( 66) 00:08:20.969 19559.975 - 19660.800: 86.9792% ( 52) 00:08:20.969 19660.800 - 19761.625: 87.7137% ( 55) 00:08:20.969 19761.625 - 19862.449: 88.4615% ( 56) 00:08:20.969 19862.449 - 19963.274: 89.4231% ( 72) 00:08:20.969 19963.274 - 20064.098: 90.2110% ( 59) 00:08:20.969 20064.098 - 20164.923: 90.9989% ( 59) 00:08:20.969 20164.923 - 20265.748: 91.9872% ( 74) 00:08:20.969 20265.748 - 20366.572: 92.6816% ( 52) 00:08:20.969 20366.572 - 
20467.397: 93.2826% ( 45) 00:08:20.969 20467.397 - 20568.222: 93.7901% ( 38) 00:08:20.970 20568.222 - 20669.046: 94.2708% ( 36) 00:08:20.970 20669.046 - 20769.871: 94.6848% ( 31) 00:08:20.970 20769.871 - 20870.695: 95.0454% ( 27) 00:08:20.970 20870.695 - 20971.520: 95.4460% ( 30) 00:08:20.970 20971.520 - 21072.345: 95.8333% ( 29) 00:08:20.970 21072.345 - 21173.169: 96.3275% ( 37) 00:08:20.970 21173.169 - 21273.994: 96.7415% ( 31) 00:08:20.970 21273.994 - 21374.818: 96.9551% ( 16) 00:08:20.970 21374.818 - 21475.643: 97.3157% ( 27) 00:08:20.970 21475.643 - 21576.468: 97.5160% ( 15) 00:08:20.970 21576.468 - 21677.292: 97.6629% ( 11) 00:08:20.970 21677.292 - 21778.117: 97.7431% ( 6) 00:08:20.970 21778.117 - 21878.942: 97.8232% ( 6) 00:08:20.970 21878.942 - 21979.766: 97.9033% ( 6) 00:08:20.970 21979.766 - 22080.591: 98.0369% ( 10) 00:08:20.970 22080.591 - 22181.415: 98.1571% ( 9) 00:08:20.970 22181.415 - 22282.240: 98.2105% ( 4) 00:08:20.970 22282.240 - 22383.065: 98.2772% ( 5) 00:08:20.970 22383.065 - 22483.889: 98.2906% ( 1) 00:08:20.970 29239.138 - 29440.788: 98.3707% ( 6) 00:08:20.970 29440.788 - 29642.437: 98.5176% ( 11) 00:08:20.970 29642.437 - 29844.086: 98.6245% ( 8) 00:08:20.970 29844.086 - 30045.735: 98.8381% ( 16) 00:08:20.970 30045.735 - 30247.385: 98.8782% ( 3) 00:08:20.970 30247.385 - 30449.034: 98.9850% ( 8) 00:08:20.970 30449.034 - 30650.683: 99.1052% ( 9) 00:08:20.970 30650.683 - 30852.332: 99.1453% ( 3) 00:08:20.970 37506.757 - 37708.406: 99.2121% ( 5) 00:08:20.970 37708.406 - 37910.055: 99.3056% ( 7) 00:08:20.970 37910.055 - 38111.705: 99.4257% ( 9) 00:08:20.970 38111.705 - 38313.354: 99.5192% ( 7) 00:08:20.970 38313.354 - 38515.003: 99.6795% ( 12) 00:08:20.970 38515.003 - 38716.652: 99.7596% ( 6) 00:08:20.970 38716.652 - 38918.302: 99.9466% ( 14) 00:08:20.970 38918.302 - 39119.951: 100.0000% ( 4) 00:08:20.970 00:08:20.970 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:20.970 ============================================================================== 00:08:20.970 Range in us Cumulative IO count 00:08:20.970 8217.206 - 8267.618: 0.0267% ( 2) 00:08:20.970 8267.618 - 8318.031: 0.0534% ( 2) 00:08:20.970 8318.031 - 8368.443: 0.0801% ( 2) 00:08:20.970 8418.855 - 8469.268: 0.1068% ( 2) 00:08:20.970 8469.268 - 8519.680: 0.1335% ( 2) 00:08:20.970 8519.680 - 8570.092: 0.1736% ( 3) 00:08:20.970 8570.092 - 8620.505: 0.2003% ( 2) 00:08:20.970 8620.505 - 8670.917: 0.2270% ( 2) 00:08:20.970 8670.917 - 8721.329: 0.2537% ( 2) 00:08:20.970 8721.329 - 8771.742: 0.2804% ( 2) 00:08:20.970 8771.742 - 8822.154: 0.3072% ( 2) 00:08:20.970 8822.154 - 8872.566: 0.3472% ( 3) 00:08:20.970 8872.566 - 8922.978: 0.3739% ( 2) 00:08:20.970 8922.978 - 8973.391: 0.3873% ( 1) 00:08:20.970 8973.391 - 9023.803: 0.4140% ( 2) 00:08:20.970 9023.803 - 9074.215: 0.4407% ( 2) 00:08:20.970 9074.215 - 9124.628: 0.4674% ( 2) 00:08:20.970 9124.628 - 9175.040: 0.4941% ( 2) 00:08:20.970 9175.040 - 9225.452: 0.5208% ( 2) 00:08:20.970 9225.452 - 9275.865: 0.5475% ( 2) 00:08:20.970 9275.865 - 9326.277: 0.5743% ( 2) 00:08:20.970 9326.277 - 9376.689: 0.6143% ( 3) 00:08:20.970 9376.689 - 9427.102: 0.6410% ( 2) 00:08:20.970 9427.102 - 9477.514: 0.6677% ( 2) 00:08:20.970 9477.514 - 9527.926: 0.6944% ( 2) 00:08:20.970 9527.926 - 9578.338: 0.7212% ( 2) 00:08:20.970 9578.338 - 9628.751: 0.7479% ( 2) 00:08:20.970 9628.751 - 9679.163: 0.7746% ( 2) 00:08:20.970 9679.163 - 9729.575: 0.8013% ( 2) 00:08:20.970 9729.575 - 9779.988: 0.8413% ( 3) 00:08:20.970 9779.988 - 9830.400: 0.8547% ( 1) 00:08:20.970 13712.148 - 
13812.972: 0.9482% ( 7) 00:08:20.970 13812.972 - 13913.797: 1.1351% ( 14) 00:08:20.970 13913.797 - 14014.622: 1.5892% ( 34) 00:08:20.970 14014.622 - 14115.446: 2.2169% ( 47) 00:08:20.970 14115.446 - 14216.271: 3.1384% ( 69) 00:08:20.970 14216.271 - 14317.095: 4.1132% ( 73) 00:08:20.970 14317.095 - 14417.920: 5.1683% ( 79) 00:08:20.970 14417.920 - 14518.745: 6.4236% ( 94) 00:08:20.970 14518.745 - 14619.569: 8.0262% ( 120) 00:08:20.970 14619.569 - 14720.394: 9.8024% ( 133) 00:08:20.970 14720.394 - 14821.218: 11.8723% ( 155) 00:08:20.970 14821.218 - 14922.043: 13.7420% ( 140) 00:08:20.970 14922.043 - 15022.868: 15.5449% ( 135) 00:08:20.970 15022.868 - 15123.692: 17.8953% ( 176) 00:08:20.970 15123.692 - 15224.517: 20.4861% ( 194) 00:08:20.970 15224.517 - 15325.342: 22.7831% ( 172) 00:08:20.970 15325.342 - 15426.166: 25.4006% ( 196) 00:08:20.970 15426.166 - 15526.991: 27.9113% ( 188) 00:08:20.970 15526.991 - 15627.815: 30.2217% ( 173) 00:08:20.970 15627.815 - 15728.640: 32.5721% ( 176) 00:08:20.970 15728.640 - 15829.465: 34.8958% ( 174) 00:08:20.970 15829.465 - 15930.289: 37.3531% ( 184) 00:08:20.970 15930.289 - 16031.114: 39.7169% ( 177) 00:08:20.970 16031.114 - 16131.938: 42.3878% ( 200) 00:08:20.970 16131.938 - 16232.763: 44.6848% ( 172) 00:08:20.970 16232.763 - 16333.588: 46.5946% ( 143) 00:08:20.970 16333.588 - 16434.412: 48.3707% ( 133) 00:08:20.970 16434.412 - 16535.237: 50.3205% ( 146) 00:08:20.970 16535.237 - 16636.062: 52.1368% ( 136) 00:08:20.970 16636.062 - 16736.886: 54.0198% ( 141) 00:08:20.970 16736.886 - 16837.711: 56.2233% ( 165) 00:08:20.970 16837.711 - 16938.535: 58.0796% ( 139) 00:08:20.970 16938.535 - 17039.360: 59.9092% ( 137) 00:08:20.970 17039.360 - 17140.185: 61.4717% ( 117) 00:08:20.970 17140.185 - 17241.009: 63.1677% ( 127) 00:08:20.970 17241.009 - 17341.834: 64.8638% ( 127) 00:08:20.970 17341.834 - 17442.658: 66.5865% ( 129) 00:08:20.970 17442.658 - 17543.483: 68.2292% ( 123) 00:08:20.970 17543.483 - 17644.308: 69.6581% ( 107) 00:08:20.970 17644.308 - 17745.132: 70.9669% ( 98) 00:08:20.970 17745.132 - 17845.957: 72.2089% ( 93) 00:08:20.970 17845.957 - 17946.782: 73.3440% ( 85) 00:08:20.970 17946.782 - 18047.606: 74.4391% ( 82) 00:08:20.970 18047.606 - 18148.431: 75.5609% ( 84) 00:08:20.970 18148.431 - 18249.255: 76.6560% ( 82) 00:08:20.970 18249.255 - 18350.080: 77.5507% ( 67) 00:08:20.970 18350.080 - 18450.905: 78.2051% ( 49) 00:08:20.970 18450.905 - 18551.729: 78.7794% ( 43) 00:08:20.970 18551.729 - 18652.554: 79.5540% ( 58) 00:08:20.970 18652.554 - 18753.378: 80.1683% ( 46) 00:08:20.970 18753.378 - 18854.203: 80.7425% ( 43) 00:08:20.970 18854.203 - 18955.028: 81.3969% ( 49) 00:08:20.970 18955.028 - 19055.852: 82.2115% ( 61) 00:08:20.970 19055.852 - 19156.677: 82.9460% ( 55) 00:08:20.970 19156.677 - 19257.502: 83.6538% ( 53) 00:08:20.970 19257.502 - 19358.326: 84.2682% ( 46) 00:08:20.970 19358.326 - 19459.151: 85.0962% ( 62) 00:08:20.970 19459.151 - 19559.975: 85.9108% ( 61) 00:08:20.970 19559.975 - 19660.800: 86.8456% ( 70) 00:08:20.970 19660.800 - 19761.625: 87.6603% ( 61) 00:08:20.970 19761.625 - 19862.449: 88.3948% ( 55) 00:08:20.970 19862.449 - 19963.274: 89.1960% ( 60) 00:08:20.970 19963.274 - 20064.098: 90.1576% ( 72) 00:08:20.970 20064.098 - 20164.923: 90.9589% ( 60) 00:08:20.970 20164.923 - 20265.748: 91.6800% ( 54) 00:08:20.970 20265.748 - 20366.572: 92.2142% ( 40) 00:08:20.970 20366.572 - 20467.397: 92.7484% ( 40) 00:08:20.970 20467.397 - 20568.222: 93.3761% ( 47) 00:08:20.970 20568.222 - 20669.046: 93.9236% ( 41) 00:08:20.970 20669.046 - 20769.871: 
94.4845% ( 42) 00:08:20.970 20769.871 - 20870.695: 94.8584% ( 28) 00:08:20.970 20870.695 - 20971.520: 95.2991% ( 33) 00:08:20.970 20971.520 - 21072.345: 95.7265% ( 32) 00:08:20.970 21072.345 - 21173.169: 96.0871% ( 27) 00:08:20.970 21173.169 - 21273.994: 96.3675% ( 21) 00:08:20.970 21273.994 - 21374.818: 96.6880% ( 24) 00:08:20.970 21374.818 - 21475.643: 96.9551% ( 20) 00:08:20.970 21475.643 - 21576.468: 97.1688% ( 16) 00:08:20.970 21576.468 - 21677.292: 97.3958% ( 17) 00:08:20.970 21677.292 - 21778.117: 97.5427% ( 11) 00:08:20.970 21778.117 - 21878.942: 97.7030% ( 12) 00:08:20.970 21878.942 - 21979.766: 97.8499% ( 11) 00:08:20.970 21979.766 - 22080.591: 97.9701% ( 9) 00:08:20.970 22080.591 - 22181.415: 98.0769% ( 8) 00:08:20.970 22181.415 - 22282.240: 98.1437% ( 5) 00:08:20.970 22282.240 - 22383.065: 98.2105% ( 5) 00:08:20.970 22383.065 - 22483.889: 98.2772% ( 5) 00:08:20.970 22483.889 - 22584.714: 98.2906% ( 1) 00:08:20.970 28835.840 - 29037.489: 98.4108% ( 9) 00:08:20.970 29037.489 - 29239.138: 98.5443% ( 10) 00:08:20.970 29239.138 - 29440.788: 98.6779% ( 10) 00:08:20.970 29440.788 - 29642.437: 98.8114% ( 10) 00:08:20.970 29642.437 - 29844.086: 98.9450% ( 10) 00:08:20.970 29844.086 - 30045.735: 99.0652% ( 9) 00:08:20.970 30045.735 - 30247.385: 99.1453% ( 6) 00:08:20.970 36901.809 - 37103.458: 99.1587% ( 1) 00:08:20.970 37103.458 - 37305.108: 99.2655% ( 8) 00:08:20.970 37305.108 - 37506.757: 99.4124% ( 11) 00:08:20.970 37506.757 - 37708.406: 99.5326% ( 9) 00:08:20.970 37708.406 - 37910.055: 99.6661% ( 10) 00:08:20.970 37910.055 - 38111.705: 99.8130% ( 11) 00:08:20.970 38111.705 - 38313.354: 99.9332% ( 9) 00:08:20.970 38313.354 - 38515.003: 100.0000% ( 5) 00:08:20.970 00:08:20.970 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:20.971 ============================================================================== 00:08:20.971 Range in us Cumulative IO count 00:08:20.971 7208.960 - 7259.372: 0.1202% ( 9) 00:08:20.971 7259.372 - 7309.785: 0.2404% ( 9) 00:08:20.971 7309.785 - 7360.197: 0.2938% ( 4) 00:08:20.971 7360.197 - 7410.609: 0.3205% ( 2) 00:08:20.971 7410.609 - 7461.022: 0.3339% ( 1) 00:08:20.971 7461.022 - 7511.434: 0.4140% ( 6) 00:08:20.971 7511.434 - 7561.846: 0.4541% ( 3) 00:08:20.971 7561.846 - 7612.258: 0.5208% ( 5) 00:08:20.971 7612.258 - 7662.671: 0.5743% ( 4) 00:08:20.971 7662.671 - 7713.083: 0.6277% ( 4) 00:08:20.971 7713.083 - 7763.495: 0.6811% ( 4) 00:08:20.971 7763.495 - 7813.908: 0.7345% ( 4) 00:08:20.971 7813.908 - 7864.320: 0.8013% ( 5) 00:08:20.971 7864.320 - 7914.732: 0.8280% ( 2) 00:08:20.971 7914.732 - 7965.145: 0.8413% ( 1) 00:08:20.971 7965.145 - 8015.557: 0.8547% ( 1) 00:08:20.971 13409.674 - 13510.498: 0.8681% ( 1) 00:08:20.971 13510.498 - 13611.323: 0.9215% ( 4) 00:08:20.971 13611.323 - 13712.148: 1.1485% ( 17) 00:08:20.971 13712.148 - 13812.972: 1.4290% ( 21) 00:08:20.971 13812.972 - 13913.797: 1.8162% ( 29) 00:08:20.971 13913.797 - 14014.622: 2.4439% ( 47) 00:08:20.971 14014.622 - 14115.446: 3.2185% ( 58) 00:08:20.971 14115.446 - 14216.271: 4.0732% ( 64) 00:08:20.971 14216.271 - 14317.095: 4.9813% ( 68) 00:08:20.971 14317.095 - 14417.920: 5.9028% ( 69) 00:08:20.971 14417.920 - 14518.745: 7.3851% ( 111) 00:08:20.971 14518.745 - 14619.569: 9.1880% ( 135) 00:08:20.971 14619.569 - 14720.394: 11.2981% ( 158) 00:08:20.971 14720.394 - 14821.218: 13.5283% ( 167) 00:08:20.971 14821.218 - 14922.043: 15.8520% ( 174) 00:08:20.971 14922.043 - 15022.868: 18.4428% ( 194) 00:08:20.971 15022.868 - 15123.692: 20.7666% ( 174) 00:08:20.971 15123.692 - 
15224.517: 23.1971% ( 182) 00:08:20.971 15224.517 - 15325.342: 25.4274% ( 167) 00:08:20.971 15325.342 - 15426.166: 27.4973% ( 155) 00:08:20.971 15426.166 - 15526.991: 29.3937% ( 142) 00:08:20.971 15526.991 - 15627.815: 31.5304% ( 160) 00:08:20.971 15627.815 - 15728.640: 33.8809% ( 176) 00:08:20.971 15728.640 - 15829.465: 35.9642% ( 156) 00:08:20.971 15829.465 - 15930.289: 37.9941% ( 152) 00:08:20.971 15930.289 - 16031.114: 40.0908% ( 157) 00:08:20.971 16031.114 - 16131.938: 42.0807% ( 149) 00:08:20.971 16131.938 - 16232.763: 43.9770% ( 142) 00:08:20.971 16232.763 - 16333.588: 46.0069% ( 152) 00:08:20.971 16333.588 - 16434.412: 48.0235% ( 151) 00:08:20.971 16434.412 - 16535.237: 49.8531% ( 137) 00:08:20.971 16535.237 - 16636.062: 51.6026% ( 131) 00:08:20.971 16636.062 - 16736.886: 53.4589% ( 139) 00:08:20.971 16736.886 - 16837.711: 55.2618% ( 135) 00:08:20.971 16837.711 - 16938.535: 57.4920% ( 167) 00:08:20.971 16938.535 - 17039.360: 59.4551% ( 147) 00:08:20.971 17039.360 - 17140.185: 61.3782% ( 144) 00:08:20.971 17140.185 - 17241.009: 63.2879% ( 143) 00:08:20.971 17241.009 - 17341.834: 64.9172% ( 122) 00:08:20.971 17341.834 - 17442.658: 66.4663% ( 116) 00:08:20.971 17442.658 - 17543.483: 67.9087% ( 108) 00:08:20.971 17543.483 - 17644.308: 69.2975% ( 104) 00:08:20.971 17644.308 - 17745.132: 70.5395% ( 93) 00:08:20.971 17745.132 - 17845.957: 71.6346% ( 82) 00:08:20.971 17845.957 - 17946.782: 72.6095% ( 73) 00:08:20.971 17946.782 - 18047.606: 73.3307% ( 54) 00:08:20.971 18047.606 - 18148.431: 74.0385% ( 53) 00:08:20.971 18148.431 - 18249.255: 74.7463% ( 53) 00:08:20.971 18249.255 - 18350.080: 75.6277% ( 66) 00:08:20.971 18350.080 - 18450.905: 76.5091% ( 66) 00:08:20.971 18450.905 - 18551.729: 77.3104% ( 60) 00:08:20.971 18551.729 - 18652.554: 78.2185% ( 68) 00:08:20.971 18652.554 - 18753.378: 79.3403% ( 84) 00:08:20.971 18753.378 - 18854.203: 80.4487% ( 83) 00:08:20.971 18854.203 - 18955.028: 81.5304% ( 81) 00:08:20.971 18955.028 - 19055.852: 82.6389% ( 83) 00:08:20.971 19055.852 - 19156.677: 83.5737% ( 70) 00:08:20.971 19156.677 - 19257.502: 84.4151% ( 63) 00:08:20.971 19257.502 - 19358.326: 85.3232% ( 68) 00:08:20.971 19358.326 - 19459.151: 86.2046% ( 66) 00:08:20.971 19459.151 - 19559.975: 87.0860% ( 66) 00:08:20.971 19559.975 - 19660.800: 87.8072% ( 54) 00:08:20.971 19660.800 - 19761.625: 88.3280% ( 39) 00:08:20.971 19761.625 - 19862.449: 88.8622% ( 40) 00:08:20.971 19862.449 - 19963.274: 89.4498% ( 44) 00:08:20.971 19963.274 - 20064.098: 89.9840% ( 40) 00:08:20.971 20064.098 - 20164.923: 90.7185% ( 55) 00:08:20.971 20164.923 - 20265.748: 91.3996% ( 51) 00:08:20.971 20265.748 - 20366.572: 92.0272% ( 47) 00:08:20.971 20366.572 - 20467.397: 92.6549% ( 47) 00:08:20.971 20467.397 - 20568.222: 93.0823% ( 32) 00:08:20.971 20568.222 - 20669.046: 93.5497% ( 35) 00:08:20.971 20669.046 - 20769.871: 94.0171% ( 35) 00:08:20.971 20769.871 - 20870.695: 94.5780% ( 42) 00:08:20.971 20870.695 - 20971.520: 95.0988% ( 39) 00:08:20.971 20971.520 - 21072.345: 95.5662% ( 35) 00:08:20.971 21072.345 - 21173.169: 96.0604% ( 37) 00:08:20.971 21173.169 - 21273.994: 96.4476% ( 29) 00:08:20.971 21273.994 - 21374.818: 96.7949% ( 26) 00:08:20.971 21374.818 - 21475.643: 97.2222% ( 32) 00:08:20.971 21475.643 - 21576.468: 97.4359% ( 16) 00:08:20.971 21576.468 - 21677.292: 97.5962% ( 12) 00:08:20.971 21677.292 - 21778.117: 97.7564% ( 12) 00:08:20.971 21778.117 - 21878.942: 97.8632% ( 8) 00:08:20.971 21878.942 - 21979.766: 97.9434% ( 6) 00:08:20.971 21979.766 - 22080.591: 98.0235% ( 6) 00:08:20.971 22080.591 - 22181.415: 
98.0903% ( 5) 00:08:20.971 22181.415 - 22282.240: 98.1571% ( 5) 00:08:20.971 22282.240 - 22383.065: 98.2372% ( 6) 00:08:20.971 22383.065 - 22483.889: 98.2906% ( 4) 00:08:20.971 29037.489 - 29239.138: 98.3040% ( 1) 00:08:20.971 29239.138 - 29440.788: 98.3707% ( 5) 00:08:20.971 29440.788 - 29642.437: 98.4909% ( 9) 00:08:20.971 29642.437 - 29844.086: 98.6111% ( 9) 00:08:20.971 29844.086 - 30045.735: 98.7179% ( 8) 00:08:20.971 30045.735 - 30247.385: 98.8114% ( 7) 00:08:20.971 30247.385 - 30449.034: 98.9183% ( 8) 00:08:20.971 30449.034 - 30650.683: 99.0251% ( 8) 00:08:20.971 30650.683 - 30852.332: 99.1453% ( 9) 00:08:20.971 37506.757 - 37708.406: 99.2388% ( 7) 00:08:20.971 37708.406 - 37910.055: 99.3456% ( 8) 00:08:20.971 37910.055 - 38111.705: 99.4792% ( 10) 00:08:20.971 38111.705 - 38313.354: 99.5860% ( 8) 00:08:20.971 38313.354 - 38515.003: 99.7062% ( 9) 00:08:20.971 38515.003 - 38716.652: 99.8531% ( 11) 00:08:20.971 38716.652 - 38918.302: 99.9866% ( 10) 00:08:20.971 38918.302 - 39119.951: 100.0000% ( 1) 00:08:20.971 00:08:20.971 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:20.971 ============================================================================== 00:08:20.971 Range in us Cumulative IO count 00:08:20.971 6251.126 - 6276.332: 0.0134% ( 1) 00:08:20.971 6276.332 - 6301.538: 0.0401% ( 2) 00:08:20.971 6301.538 - 6326.745: 0.0668% ( 2) 00:08:20.971 6326.745 - 6351.951: 0.0801% ( 1) 00:08:20.971 6351.951 - 6377.157: 0.1068% ( 2) 00:08:20.971 6377.157 - 6402.363: 0.1335% ( 2) 00:08:20.971 6402.363 - 6427.569: 0.1870% ( 4) 00:08:20.971 6427.569 - 6452.775: 0.2404% ( 4) 00:08:20.971 6452.775 - 6503.188: 0.2938% ( 4) 00:08:20.971 6503.188 - 6553.600: 0.3472% ( 4) 00:08:20.971 6553.600 - 6604.012: 0.3739% ( 2) 00:08:20.971 6604.012 - 6654.425: 0.4274% ( 4) 00:08:20.971 6654.425 - 6704.837: 0.4674% ( 3) 00:08:20.971 6704.837 - 6755.249: 0.5208% ( 4) 00:08:20.971 6755.249 - 6805.662: 0.5743% ( 4) 00:08:20.971 6805.662 - 6856.074: 0.6277% ( 4) 00:08:20.971 6856.074 - 6906.486: 0.6811% ( 4) 00:08:20.971 6906.486 - 6956.898: 0.7212% ( 3) 00:08:20.971 6956.898 - 7007.311: 0.7746% ( 4) 00:08:20.971 7007.311 - 7057.723: 0.8280% ( 4) 00:08:20.971 7057.723 - 7108.135: 0.8547% ( 2) 00:08:20.971 12250.191 - 12300.603: 0.8948% ( 3) 00:08:20.971 12300.603 - 12351.015: 0.9215% ( 2) 00:08:20.971 12351.015 - 12401.428: 0.9482% ( 2) 00:08:20.971 12401.428 - 12451.840: 0.9749% ( 2) 00:08:20.971 12451.840 - 12502.252: 1.0283% ( 4) 00:08:20.971 12502.252 - 12552.665: 1.0550% ( 2) 00:08:20.971 12552.665 - 12603.077: 1.0817% ( 2) 00:08:20.971 12603.077 - 12653.489: 1.1084% ( 2) 00:08:20.971 12653.489 - 12703.902: 1.1351% ( 2) 00:08:20.971 12703.902 - 12754.314: 1.1619% ( 2) 00:08:20.972 12754.314 - 12804.726: 1.1752% ( 1) 00:08:20.972 12804.726 - 12855.138: 1.2019% ( 2) 00:08:20.972 12855.138 - 12905.551: 1.2286% ( 2) 00:08:20.972 12905.551 - 13006.375: 1.2954% ( 5) 00:08:20.972 13006.375 - 13107.200: 1.3488% ( 4) 00:08:20.972 13107.200 - 13208.025: 1.4022% ( 4) 00:08:20.972 13208.025 - 13308.849: 1.4690% ( 5) 00:08:20.972 13308.849 - 13409.674: 1.5358% ( 5) 00:08:20.972 13409.674 - 13510.498: 1.7094% ( 13) 00:08:20.972 13510.498 - 13611.323: 1.8830% ( 13) 00:08:20.972 13611.323 - 13712.148: 2.2035% ( 24) 00:08:20.972 13712.148 - 13812.972: 2.5374% ( 25) 00:08:20.972 13812.972 - 13913.797: 2.9380% ( 30) 00:08:20.972 13913.797 - 14014.622: 3.4188% ( 36) 00:08:20.972 14014.622 - 14115.446: 4.0598% ( 48) 00:08:20.972 14115.446 - 14216.271: 4.9279% ( 65) 00:08:20.972 14216.271 - 14317.095: 
5.9161% ( 74) 00:08:20.972 14317.095 - 14417.920: 7.0246% ( 83) 00:08:20.972 14417.920 - 14518.745: 8.2933% ( 95) 00:08:20.972 14518.745 - 14619.569: 9.8558% ( 117) 00:08:20.972 14619.569 - 14720.394: 11.5652% ( 128) 00:08:20.972 14720.394 - 14821.218: 13.3280% ( 132) 00:08:20.972 14821.218 - 14922.043: 14.9973% ( 125) 00:08:20.972 14922.043 - 15022.868: 16.8002% ( 135) 00:08:20.972 15022.868 - 15123.692: 19.2041% ( 180) 00:08:20.972 15123.692 - 15224.517: 21.8082% ( 195) 00:08:20.972 15224.517 - 15325.342: 24.0919% ( 171) 00:08:20.972 15325.342 - 15426.166: 26.4690% ( 178) 00:08:20.972 15426.166 - 15526.991: 28.5123% ( 153) 00:08:20.972 15526.991 - 15627.815: 30.4354% ( 144) 00:08:20.972 15627.815 - 15728.640: 32.4653% ( 152) 00:08:20.972 15728.640 - 15829.465: 34.3349% ( 140) 00:08:20.972 15829.465 - 15930.289: 36.0710% ( 130) 00:08:20.972 15930.289 - 16031.114: 37.7804% ( 128) 00:08:20.972 16031.114 - 16131.938: 39.5833% ( 135) 00:08:20.972 16131.938 - 16232.763: 41.4797% ( 142) 00:08:20.972 16232.763 - 16333.588: 43.2292% ( 131) 00:08:20.972 16333.588 - 16434.412: 45.2057% ( 148) 00:08:20.972 16434.412 - 16535.237: 47.4493% ( 168) 00:08:20.972 16535.237 - 16636.062: 49.5459% ( 157) 00:08:20.972 16636.062 - 16736.886: 51.9498% ( 180) 00:08:20.972 16736.886 - 16837.711: 54.6074% ( 199) 00:08:20.972 16837.711 - 16938.535: 57.1715% ( 192) 00:08:20.972 16938.535 - 17039.360: 59.5085% ( 175) 00:08:20.972 17039.360 - 17140.185: 61.6052% ( 157) 00:08:20.972 17140.185 - 17241.009: 63.5016% ( 142) 00:08:20.972 17241.009 - 17341.834: 65.3980% ( 142) 00:08:20.972 17341.834 - 17442.658: 67.2409% ( 138) 00:08:20.972 17442.658 - 17543.483: 68.9103% ( 125) 00:08:20.972 17543.483 - 17644.308: 70.5662% ( 124) 00:08:20.972 17644.308 - 17745.132: 71.9284% ( 102) 00:08:20.972 17745.132 - 17845.957: 72.9567% ( 77) 00:08:20.972 17845.957 - 17946.782: 74.0251% ( 80) 00:08:20.972 17946.782 - 18047.606: 75.0134% ( 74) 00:08:20.972 18047.606 - 18148.431: 76.0016% ( 74) 00:08:20.972 18148.431 - 18249.255: 76.9498% ( 71) 00:08:20.972 18249.255 - 18350.080: 77.9514% ( 75) 00:08:20.972 18350.080 - 18450.905: 78.6458% ( 52) 00:08:20.972 18450.905 - 18551.729: 79.1266% ( 36) 00:08:20.972 18551.729 - 18652.554: 79.7810% ( 49) 00:08:20.972 18652.554 - 18753.378: 80.4354% ( 49) 00:08:20.972 18753.378 - 18854.203: 81.0897% ( 49) 00:08:20.972 18854.203 - 18955.028: 81.8376% ( 56) 00:08:20.972 18955.028 - 19055.852: 82.5454% ( 53) 00:08:20.972 19055.852 - 19156.677: 83.1998% ( 49) 00:08:20.972 19156.677 - 19257.502: 83.7740% ( 43) 00:08:20.972 19257.502 - 19358.326: 84.3884% ( 46) 00:08:20.972 19358.326 - 19459.151: 85.1896% ( 60) 00:08:20.972 19459.151 - 19559.975: 86.0443% ( 64) 00:08:20.972 19559.975 - 19660.800: 86.8723% ( 62) 00:08:20.972 19660.800 - 19761.625: 87.5534% ( 51) 00:08:20.972 19761.625 - 19862.449: 88.2479% ( 52) 00:08:20.972 19862.449 - 19963.274: 88.9290% ( 51) 00:08:20.972 19963.274 - 20064.098: 89.8371% ( 68) 00:08:20.972 20064.098 - 20164.923: 90.6651% ( 62) 00:08:20.972 20164.923 - 20265.748: 91.5598% ( 67) 00:08:20.972 20265.748 - 20366.572: 92.4012% ( 63) 00:08:20.972 20366.572 - 20467.397: 93.1490% ( 56) 00:08:20.972 20467.397 - 20568.222: 93.7366% ( 44) 00:08:20.972 20568.222 - 20669.046: 94.3777% ( 48) 00:08:20.972 20669.046 - 20769.871: 94.9519% ( 43) 00:08:20.972 20769.871 - 20870.695: 95.4594% ( 38) 00:08:20.972 20870.695 - 20971.520: 95.9135% ( 34) 00:08:20.972 20971.520 - 21072.345: 96.2073% ( 22) 00:08:20.972 21072.345 - 21173.169: 96.4476% ( 18) 00:08:20.972 21173.169 - 21273.994: 
96.6613% ( 16) 00:08:20.972 21273.994 - 21374.818: 96.9418% ( 21) 00:08:20.972 21374.818 - 21475.643: 97.1955% ( 19) 00:08:20.972 21475.643 - 21576.468: 97.3558% ( 12) 00:08:20.972 21576.468 - 21677.292: 97.5160% ( 12) 00:08:20.972 21677.292 - 21778.117: 97.6763% ( 12) 00:08:20.972 21778.117 - 21878.942: 97.8365% ( 12) 00:08:20.972 21878.942 - 21979.766: 97.9834% ( 11) 00:08:20.972 21979.766 - 22080.591: 98.0769% ( 7) 00:08:20.972 22080.591 - 22181.415: 98.1571% ( 6) 00:08:20.972 22181.415 - 22282.240: 98.2238% ( 5) 00:08:20.972 22282.240 - 22383.065: 98.2772% ( 4) 00:08:20.972 22383.065 - 22483.889: 98.2906% ( 1) 00:08:20.972 28432.542 - 28634.191: 98.3307% ( 3) 00:08:20.972 28634.191 - 28835.840: 98.4375% ( 8) 00:08:20.972 28835.840 - 29037.489: 98.5577% ( 9) 00:08:20.972 29037.489 - 29239.138: 98.6779% ( 9) 00:08:20.972 29239.138 - 29440.788: 98.7981% ( 9) 00:08:20.972 29440.788 - 29642.437: 98.9049% ( 8) 00:08:20.972 29642.437 - 29844.086: 99.0251% ( 9) 00:08:20.972 29844.086 - 30045.735: 99.1453% ( 9) 00:08:20.972 36296.862 - 36498.511: 99.1854% ( 3) 00:08:20.972 36498.511 - 36700.160: 99.3189% ( 10) 00:08:20.972 36700.160 - 36901.809: 99.4391% ( 9) 00:08:20.972 36901.809 - 37103.458: 99.5326% ( 7) 00:08:20.972 37103.458 - 37305.108: 99.6528% ( 9) 00:08:20.972 37305.108 - 37506.757: 99.7730% ( 9) 00:08:20.972 37506.757 - 37708.406: 99.9065% ( 10) 00:08:20.972 37708.406 - 37910.055: 100.0000% ( 7) 00:08:20.972 00:08:20.972 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:20.972 ============================================================================== 00:08:20.972 Range in us Cumulative IO count 00:08:20.972 5368.911 - 5394.117: 0.0662% ( 5) 00:08:20.972 5394.117 - 5419.323: 0.1457% ( 6) 00:08:20.972 5419.323 - 5444.529: 0.1854% ( 3) 00:08:20.972 5444.529 - 5469.735: 0.2383% ( 4) 00:08:20.972 5520.148 - 5545.354: 0.2648% ( 2) 00:08:20.972 5545.354 - 5570.560: 0.2913% ( 2) 00:08:20.972 5570.560 - 5595.766: 0.3178% ( 2) 00:08:20.972 5595.766 - 5620.972: 0.3443% ( 2) 00:08:20.972 5620.972 - 5646.178: 0.3708% ( 2) 00:08:20.972 5646.178 - 5671.385: 0.3972% ( 2) 00:08:20.972 5671.385 - 5696.591: 0.4237% ( 2) 00:08:20.972 5696.591 - 5721.797: 0.4502% ( 2) 00:08:20.972 5721.797 - 5747.003: 0.4767% ( 2) 00:08:20.972 5747.003 - 5772.209: 0.5032% ( 2) 00:08:20.972 5772.209 - 5797.415: 0.5297% ( 2) 00:08:20.972 5797.415 - 5822.622: 0.5561% ( 2) 00:08:20.972 5822.622 - 5847.828: 0.5826% ( 2) 00:08:20.972 5847.828 - 5873.034: 0.5959% ( 1) 00:08:20.972 5873.034 - 5898.240: 0.6224% ( 2) 00:08:20.972 5898.240 - 5923.446: 0.6488% ( 2) 00:08:20.972 5923.446 - 5948.652: 0.6753% ( 2) 00:08:20.972 5948.652 - 5973.858: 0.7018% ( 2) 00:08:20.972 5973.858 - 5999.065: 0.7283% ( 2) 00:08:20.972 5999.065 - 6024.271: 0.7548% ( 2) 00:08:20.972 6024.271 - 6049.477: 0.7812% ( 2) 00:08:20.972 6049.477 - 6074.683: 0.7945% ( 1) 00:08:20.972 6074.683 - 6099.889: 0.8077% ( 1) 00:08:20.972 6099.889 - 6125.095: 0.8342% ( 2) 00:08:20.972 6125.095 - 6150.302: 0.8475% ( 1) 00:08:20.972 11342.769 - 11393.182: 0.8872% ( 3) 00:08:20.972 11393.182 - 11443.594: 0.9534% ( 5) 00:08:20.972 11443.594 - 11494.006: 0.9799% ( 2) 00:08:20.972 11494.006 - 11544.418: 1.0196% ( 3) 00:08:20.972 11544.418 - 11594.831: 1.0328% ( 1) 00:08:20.973 11594.831 - 11645.243: 1.0593% ( 2) 00:08:20.973 11645.243 - 11695.655: 1.0858% ( 2) 00:08:20.973 11695.655 - 11746.068: 1.1255% ( 3) 00:08:20.973 11746.068 - 11796.480: 1.1520% ( 2) 00:08:20.973 11796.480 - 11846.892: 1.1785% ( 2) 00:08:20.973 11846.892 - 11897.305: 1.2050% ( 2) 
00:08:20.973 11897.305 - 11947.717: 1.2315% ( 2) 00:08:20.973 11947.717 - 11998.129: 1.2579% ( 2) 00:08:20.973 11998.129 - 12048.542: 1.2844% ( 2) 00:08:20.973 12048.542 - 12098.954: 1.3109% ( 2) 00:08:20.973 12098.954 - 12149.366: 1.3374% ( 2) 00:08:20.973 12149.366 - 12199.778: 1.3639% ( 2) 00:08:20.973 12199.778 - 12250.191: 1.4036% ( 3) 00:08:20.973 12250.191 - 12300.603: 1.4168% ( 1) 00:08:20.973 12300.603 - 12351.015: 1.4433% ( 2) 00:08:20.973 12351.015 - 12401.428: 1.4831% ( 3) 00:08:20.973 12401.428 - 12451.840: 1.5095% ( 2) 00:08:20.973 12451.840 - 12502.252: 1.5360% ( 2) 00:08:20.973 12502.252 - 12552.665: 1.5625% ( 2) 00:08:20.973 12552.665 - 12603.077: 1.5890% ( 2) 00:08:20.973 12603.077 - 12653.489: 1.6155% ( 2) 00:08:20.973 12653.489 - 12703.902: 1.6419% ( 2) 00:08:20.973 12703.902 - 12754.314: 1.6684% ( 2) 00:08:20.973 12754.314 - 12804.726: 1.6949% ( 2) 00:08:20.973 13409.674 - 13510.498: 1.7479% ( 4) 00:08:20.973 13510.498 - 13611.323: 1.8538% ( 8) 00:08:20.973 13611.323 - 13712.148: 1.9597% ( 8) 00:08:20.973 13712.148 - 13812.972: 2.1319% ( 13) 00:08:20.973 13812.972 - 13913.797: 2.5556% ( 32) 00:08:20.973 13913.797 - 14014.622: 3.2442% ( 52) 00:08:20.973 14014.622 - 14115.446: 3.9592% ( 54) 00:08:20.973 14115.446 - 14216.271: 4.8067% ( 64) 00:08:20.973 14216.271 - 14317.095: 5.6541% ( 64) 00:08:20.973 14317.095 - 14417.920: 6.7002% ( 79) 00:08:20.973 14417.920 - 14518.745: 7.9582% ( 95) 00:08:20.973 14518.745 - 14619.569: 9.6796% ( 130) 00:08:20.973 14619.569 - 14720.394: 11.4672% ( 135) 00:08:20.973 14720.394 - 14821.218: 13.1091% ( 124) 00:08:20.973 14821.218 - 14922.043: 14.9894% ( 142) 00:08:20.973 14922.043 - 15022.868: 16.9756% ( 150) 00:08:20.973 15022.868 - 15123.692: 18.9089% ( 146) 00:08:20.973 15123.692 - 15224.517: 20.8554% ( 147) 00:08:20.973 15224.517 - 15325.342: 23.0535% ( 166) 00:08:20.973 15325.342 - 15426.166: 25.5561% ( 189) 00:08:20.973 15426.166 - 15526.991: 27.7940% ( 169) 00:08:20.973 15526.991 - 15627.815: 30.0715% ( 172) 00:08:20.973 15627.815 - 15728.640: 32.0445% ( 149) 00:08:20.973 15728.640 - 15829.465: 34.1367% ( 158) 00:08:20.973 15829.465 - 15930.289: 35.9507% ( 137) 00:08:20.973 15930.289 - 16031.114: 37.9899% ( 154) 00:08:20.973 16031.114 - 16131.938: 40.3337% ( 177) 00:08:20.973 16131.938 - 16232.763: 42.4523% ( 160) 00:08:20.973 16232.763 - 16333.588: 44.7961% ( 177) 00:08:20.973 16333.588 - 16434.412: 47.2722% ( 187) 00:08:20.973 16434.412 - 16535.237: 49.6822% ( 182) 00:08:20.973 16535.237 - 16636.062: 52.0657% ( 180) 00:08:20.973 16636.062 - 16736.886: 54.3035% ( 169) 00:08:20.973 16736.886 - 16837.711: 56.7002% ( 181) 00:08:20.973 16837.711 - 16938.535: 58.7129% ( 152) 00:08:20.973 16938.535 - 17039.360: 60.7521% ( 154) 00:08:20.973 17039.360 - 17140.185: 63.1886% ( 184) 00:08:20.973 17140.185 - 17241.009: 65.3072% ( 160) 00:08:20.973 17241.009 - 17341.834: 67.2140% ( 144) 00:08:20.973 17341.834 - 17442.658: 68.7897% ( 119) 00:08:20.973 17442.658 - 17543.483: 70.3257% ( 116) 00:08:20.973 17543.483 - 17644.308: 71.4910% ( 88) 00:08:20.973 17644.308 - 17745.132: 72.6033% ( 84) 00:08:20.973 17745.132 - 17845.957: 73.5699% ( 73) 00:08:20.973 17845.957 - 17946.782: 74.4571% ( 67) 00:08:20.973 17946.782 - 18047.606: 75.2781% ( 62) 00:08:20.973 18047.606 - 18148.431: 76.2712% ( 75) 00:08:20.973 18148.431 - 18249.255: 76.9333% ( 50) 00:08:20.973 18249.255 - 18350.080: 77.7013% ( 58) 00:08:20.973 18350.080 - 18450.905: 78.3633% ( 50) 00:08:20.973 18450.905 - 18551.729: 78.9857% ( 47) 00:08:20.973 18551.729 - 18652.554: 79.6345% ( 49) 
00:08:20.973 18652.554 - 18753.378: 80.1510% ( 39) 00:08:20.973 18753.378 - 18854.203: 80.8263% ( 51) 00:08:20.973 18854.203 - 18955.028: 81.5281% ( 53) 00:08:20.973 18955.028 - 19055.852: 82.1769% ( 49) 00:08:20.973 19055.852 - 19156.677: 83.0111% ( 63) 00:08:20.973 19156.677 - 19257.502: 83.8189% ( 61) 00:08:20.973 19257.502 - 19358.326: 84.5207% ( 53) 00:08:20.973 19358.326 - 19459.151: 85.3284% ( 61) 00:08:20.973 19459.151 - 19559.975: 86.0567% ( 55) 00:08:20.973 19559.975 - 19660.800: 86.7452% ( 52) 00:08:20.973 19660.800 - 19761.625: 87.5397% ( 60) 00:08:20.973 19761.625 - 19862.449: 88.2812% ( 56) 00:08:20.973 19862.449 - 19963.274: 89.0360% ( 57) 00:08:20.973 19963.274 - 20064.098: 89.8173% ( 59) 00:08:20.973 20064.098 - 20164.923: 90.5456% ( 55) 00:08:20.973 20164.923 - 20265.748: 91.3268% ( 59) 00:08:20.973 20265.748 - 20366.572: 91.9756% ( 49) 00:08:20.973 20366.572 - 20467.397: 92.7966% ( 62) 00:08:20.973 20467.397 - 20568.222: 93.5249% ( 55) 00:08:20.973 20568.222 - 20669.046: 94.1737% ( 49) 00:08:20.973 20669.046 - 20769.871: 94.7961% ( 47) 00:08:20.973 20769.871 - 20870.695: 95.5244% ( 55) 00:08:20.973 20870.695 - 20971.520: 96.1070% ( 44) 00:08:20.973 20971.520 - 21072.345: 96.5969% ( 37) 00:08:20.973 21072.345 - 21173.169: 97.0604% ( 35) 00:08:20.973 21173.169 - 21273.994: 97.4179% ( 27) 00:08:20.973 21273.994 - 21374.818: 97.7489% ( 25) 00:08:20.973 21374.818 - 21475.643: 98.0270% ( 21) 00:08:20.973 21475.643 - 21576.468: 98.2654% ( 18) 00:08:20.973 21576.468 - 21677.292: 98.4640% ( 15) 00:08:20.973 21677.292 - 21778.117: 98.7023% ( 18) 00:08:20.973 21778.117 - 21878.942: 98.8877% ( 14) 00:08:20.973 21878.942 - 21979.766: 98.9936% ( 8) 00:08:20.973 21979.766 - 22080.591: 99.0466% ( 4) 00:08:20.973 22080.591 - 22181.415: 99.0996% ( 4) 00:08:20.973 22181.415 - 22282.240: 99.1393% ( 3) 00:08:20.973 22282.240 - 22383.065: 99.1525% ( 1) 00:08:20.973 28432.542 - 28634.191: 99.1658% ( 1) 00:08:20.973 28634.191 - 28835.840: 99.2585% ( 7) 00:08:20.973 28835.840 - 29037.489: 99.3776% ( 9) 00:08:20.973 29037.489 - 29239.138: 99.4571% ( 6) 00:08:20.973 29239.138 - 29440.788: 99.5895% ( 10) 00:08:20.973 29440.788 - 29642.437: 99.7087% ( 9) 00:08:20.973 29642.437 - 29844.086: 99.8411% ( 10) 00:08:20.973 29844.086 - 30045.735: 99.9603% ( 9) 00:08:20.973 30045.735 - 30247.385: 100.0000% ( 3) 00:08:20.973 00:08:20.973 03:17:08 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:21.919 Initializing NVMe Controllers 00:08:21.919 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:21.919 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:21.919 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:21.919 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:21.919 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:21.919 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:21.919 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:21.919 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:21.919 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:21.919 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:21.919 Initialization complete. Launching workers. 
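The percentile summaries in this log are derived from the cumulative latency histograms printed with them: each histogram line gives a latency bucket under "Range in us" plus the cumulative share of I/Os completed at or below that bucket, so a summary entry such as "99.00000% : 29844.086us" is the upper bound of the first bucket whose cumulative percentage reaches the target. A minimal Python sketch of that lookup follows; the abridged bucket list is copied from the "Summary latency data for PCIE (0000:00:12.0) NSID 2" table above, and parsing the raw log lines into (latency_us, cumulative_percent) pairs is assumed to have been done already.

from bisect import bisect_left

# Abridged (latency_us, cumulative_percent) pairs, copied from the
# PCIE (0000:00:12.0) NSID 2 summary above; a full histogram has one
# pair per bucket, and parsing the log lines is assumed done elsewhere.
buckets = [
    (12502.252, 1.0),
    (14720.394, 10.0),
    (15426.166, 25.0),
    (16736.886, 50.0),
    (18047.606, 75.0),
    (20164.923, 90.0),
    (20870.695, 95.0),
    (22080.591, 98.0),
    (29844.086, 99.0),
    (37910.055, 100.0),
]

def percentile_latency(pairs, pct):
    """Upper bound in us of the first bucket whose cumulative % >= pct."""
    cumulative = [c for _, c in pairs]
    i = bisect_left(cumulative, pct)
    if i == len(pairs):
        raise ValueError("percentile beyond recorded histogram")
    return pairs[i][0]

print(percentile_latency(buckets, 95.0))  # 20870.695, matching the log
print(percentile_latency(buckets, 99.0))  # 29844.086, matching the log

The same lookup applies to the write-workload results that follow below.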
00:08:21.919 ======================================================== 00:08:21.919 Latency(us) 00:08:21.919 Device Information : IOPS MiB/s Average min max 00:08:21.919 PCIE (0000:00:13.0) NSID 1 from core 0: 7891.38 92.48 16240.50 11765.14 35645.22 00:08:21.919 PCIE (0000:00:10.0) NSID 1 from core 0: 7891.38 92.48 16228.53 10377.57 35455.80 00:08:21.919 PCIE (0000:00:11.0) NSID 1 from core 0: 7891.38 92.48 16211.46 10260.21 34620.96 00:08:21.919 PCIE (0000:00:12.0) NSID 1 from core 0: 7891.38 92.48 16193.81 7803.89 35702.58 00:08:21.919 PCIE (0000:00:12.0) NSID 2 from core 0: 7891.38 92.48 16176.70 6955.32 35815.50 00:08:21.919 PCIE (0000:00:12.0) NSID 3 from core 0: 7955.02 93.22 16030.40 6216.91 27112.33 00:08:21.919 ======================================================== 00:08:21.919 Total : 47411.93 555.61 16180.03 6216.91 35815.50 00:08:21.919 00:08:21.919 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:21.919 ================================================================================= 00:08:21.919 1.00000% : 13107.200us 00:08:21.919 10.00000% : 14417.920us 00:08:21.919 25.00000% : 15022.868us 00:08:21.919 50.00000% : 15627.815us 00:08:21.919 75.00000% : 16736.886us 00:08:21.919 90.00000% : 18955.028us 00:08:21.919 95.00000% : 20064.098us 00:08:21.919 98.00000% : 22584.714us 00:08:21.919 99.00000% : 27020.997us 00:08:21.919 99.50000% : 34885.317us 00:08:21.919 99.90000% : 35691.914us 00:08:21.919 99.99000% : 35691.914us 00:08:21.919 99.99900% : 35691.914us 00:08:21.919 99.99990% : 35691.914us 00:08:21.919 99.99999% : 35691.914us 00:08:21.919 00:08:21.919 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:21.919 ================================================================================= 00:08:21.919 1.00000% : 13308.849us 00:08:21.919 10.00000% : 14317.095us 00:08:21.919 25.00000% : 14922.043us 00:08:21.919 50.00000% : 15627.815us 00:08:21.919 75.00000% : 16736.886us 00:08:21.919 90.00000% : 19156.677us 00:08:21.919 95.00000% : 20164.923us 00:08:21.919 98.00000% : 23391.311us 00:08:21.919 99.00000% : 26819.348us 00:08:21.919 99.50000% : 34683.668us 00:08:21.919 99.90000% : 35288.615us 00:08:21.919 99.99000% : 35490.265us 00:08:21.919 99.99900% : 35490.265us 00:08:21.919 99.99990% : 35490.265us 00:08:21.919 99.99999% : 35490.265us 00:08:21.919 00:08:21.919 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:21.919 ================================================================================= 00:08:21.919 1.00000% : 13510.498us 00:08:21.919 10.00000% : 14417.920us 00:08:21.919 25.00000% : 15022.868us 00:08:21.919 50.00000% : 15526.991us 00:08:21.919 75.00000% : 16837.711us 00:08:21.919 90.00000% : 18854.203us 00:08:21.919 95.00000% : 19963.274us 00:08:21.919 98.00000% : 24298.732us 00:08:21.919 99.00000% : 26617.698us 00:08:21.919 99.50000% : 33877.071us 00:08:21.919 99.90000% : 34683.668us 00:08:21.919 99.99000% : 34683.668us 00:08:21.919 99.99900% : 34683.668us 00:08:21.919 99.99990% : 34683.668us 00:08:21.919 99.99999% : 34683.668us 00:08:21.919 00:08:21.919 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:21.919 ================================================================================= 00:08:21.919 1.00000% : 13208.025us 00:08:21.919 10.00000% : 14417.920us 00:08:21.919 25.00000% : 14922.043us 00:08:21.919 50.00000% : 15526.991us 00:08:21.919 75.00000% : 16736.886us 00:08:21.919 90.00000% : 19055.852us 00:08:21.919 95.00000% : 20064.098us 00:08:21.919 98.00000% : 23592.960us 
00:08:21.919 99.00000% : 27020.997us 00:08:21.919 99.50000% : 35086.966us 00:08:21.919 99.90000% : 35691.914us 00:08:21.919 99.99000% : 35893.563us 00:08:21.919 99.99900% : 35893.563us 00:08:21.919 99.99990% : 35893.563us 00:08:21.919 99.99999% : 35893.563us 00:08:21.919 00:08:21.919 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:21.919 ================================================================================= 00:08:21.920 1.00000% : 13208.025us 00:08:21.920 10.00000% : 14317.095us 00:08:21.920 25.00000% : 14922.043us 00:08:21.920 50.00000% : 15526.991us 00:08:21.920 75.00000% : 16636.062us 00:08:21.920 90.00000% : 19156.677us 00:08:21.920 95.00000% : 19862.449us 00:08:21.920 98.00000% : 23492.135us 00:08:21.920 99.00000% : 26617.698us 00:08:21.920 99.50000% : 35086.966us 00:08:21.920 99.90000% : 35691.914us 00:08:21.920 99.99000% : 35893.563us 00:08:21.920 99.99900% : 35893.563us 00:08:21.920 99.99990% : 35893.563us 00:08:21.920 99.99999% : 35893.563us 00:08:21.920 00:08:21.920 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:21.920 ================================================================================= 00:08:21.920 1.00000% : 13208.025us 00:08:21.920 10.00000% : 14317.095us 00:08:21.920 25.00000% : 15022.868us 00:08:21.920 50.00000% : 15627.815us 00:08:21.920 75.00000% : 16636.062us 00:08:21.920 90.00000% : 18753.378us 00:08:21.920 95.00000% : 19862.449us 00:08:21.920 98.00000% : 21475.643us 00:08:21.920 99.00000% : 23391.311us 00:08:21.920 99.50000% : 26416.049us 00:08:21.920 99.90000% : 27020.997us 00:08:21.920 99.99000% : 27222.646us 00:08:21.920 99.99900% : 27222.646us 00:08:21.920 99.99990% : 27222.646us 00:08:21.920 99.99999% : 27222.646us 00:08:21.920 00:08:21.920 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:21.920 ============================================================================== 00:08:21.920 Range in us Cumulative IO count 00:08:21.920 11746.068 - 11796.480: 0.0378% ( 3) 00:08:21.920 11796.480 - 11846.892: 0.0756% ( 3) 00:08:21.920 11846.892 - 11897.305: 0.1134% ( 3) 00:08:21.920 11897.305 - 11947.717: 0.1764% ( 5) 00:08:21.920 11947.717 - 11998.129: 0.2394% ( 5) 00:08:21.920 11998.129 - 12048.542: 0.3276% ( 7) 00:08:21.920 12048.542 - 12098.954: 0.4410% ( 9) 00:08:21.920 12098.954 - 12149.366: 0.5040% ( 5) 00:08:21.920 12149.366 - 12199.778: 0.6300% ( 10) 00:08:21.920 12199.778 - 12250.191: 0.6930% ( 5) 00:08:21.920 12250.191 - 12300.603: 0.7812% ( 7) 00:08:21.920 12300.603 - 12351.015: 0.8065% ( 2) 00:08:21.920 12754.314 - 12804.726: 0.8191% ( 1) 00:08:21.920 12804.726 - 12855.138: 0.8443% ( 2) 00:08:21.920 12855.138 - 12905.551: 0.8695% ( 2) 00:08:21.920 12905.551 - 13006.375: 0.9703% ( 8) 00:08:21.920 13006.375 - 13107.200: 1.0585% ( 7) 00:08:21.920 13107.200 - 13208.025: 1.2223% ( 13) 00:08:21.920 13208.025 - 13308.849: 1.4491% ( 18) 00:08:21.920 13308.849 - 13409.674: 1.6381% ( 15) 00:08:21.920 13409.674 - 13510.498: 1.8523% ( 17) 00:08:21.920 13510.498 - 13611.323: 2.1169% ( 21) 00:08:21.920 13611.323 - 13712.148: 2.5076% ( 31) 00:08:21.920 13712.148 - 13812.972: 2.9234% ( 33) 00:08:21.920 13812.972 - 13913.797: 3.6164% ( 55) 00:08:21.920 13913.797 - 14014.622: 4.5867% ( 77) 00:08:21.920 14014.622 - 14115.446: 5.6074% ( 81) 00:08:21.920 14115.446 - 14216.271: 7.0312% ( 113) 00:08:21.920 14216.271 - 14317.095: 8.5055% ( 117) 00:08:21.920 14317.095 - 14417.920: 10.3075% ( 143) 00:08:21.920 14417.920 - 14518.745: 12.7520% ( 194) 00:08:21.920 14518.745 - 14619.569: 14.9446% ( 
174) 00:08:21.920 14619.569 - 14720.394: 18.0822% ( 249) 00:08:21.920 14720.394 - 14821.218: 21.5096% ( 272) 00:08:21.920 14821.218 - 14922.043: 24.5338% ( 240) 00:08:21.920 14922.043 - 15022.868: 27.5580% ( 240) 00:08:21.920 15022.868 - 15123.692: 31.4768% ( 311) 00:08:21.920 15123.692 - 15224.517: 36.1895% ( 374) 00:08:21.920 15224.517 - 15325.342: 40.4486% ( 338) 00:08:21.920 15325.342 - 15426.166: 44.8715% ( 351) 00:08:21.920 15426.166 - 15526.991: 49.2061% ( 344) 00:08:21.920 15526.991 - 15627.815: 53.1628% ( 314) 00:08:21.920 15627.815 - 15728.640: 56.6532% ( 277) 00:08:21.920 15728.640 - 15829.465: 59.5640% ( 231) 00:08:21.920 15829.465 - 15930.289: 62.3488% ( 221) 00:08:21.920 15930.289 - 16031.114: 64.5539% ( 175) 00:08:21.920 16031.114 - 16131.938: 66.9355% ( 189) 00:08:21.920 16131.938 - 16232.763: 68.7374% ( 143) 00:08:21.920 16232.763 - 16333.588: 70.1109% ( 109) 00:08:21.920 16333.588 - 16434.412: 71.4592% ( 107) 00:08:21.920 16434.412 - 16535.237: 73.1729% ( 136) 00:08:21.920 16535.237 - 16636.062: 74.2566% ( 86) 00:08:21.920 16636.062 - 16736.886: 75.1890% ( 74) 00:08:21.920 16736.886 - 16837.711: 76.1971% ( 80) 00:08:21.920 16837.711 - 16938.535: 77.5580% ( 108) 00:08:21.920 16938.535 - 17039.360: 79.1583% ( 127) 00:08:21.920 17039.360 - 17140.185: 79.9521% ( 63) 00:08:21.920 17140.185 - 17241.009: 80.6200% ( 53) 00:08:21.920 17241.009 - 17341.834: 81.2626% ( 51) 00:08:21.920 17341.834 - 17442.658: 81.6406% ( 30) 00:08:21.920 17442.658 - 17543.483: 81.9934% ( 28) 00:08:21.920 17543.483 - 17644.308: 82.6739% ( 54) 00:08:21.920 17644.308 - 17745.132: 83.3543% ( 54) 00:08:21.920 17745.132 - 17845.957: 83.9970% ( 51) 00:08:21.920 17845.957 - 17946.782: 84.5766% ( 46) 00:08:21.920 17946.782 - 18047.606: 85.5091% ( 74) 00:08:21.920 18047.606 - 18148.431: 86.0509% ( 43) 00:08:21.920 18148.431 - 18249.255: 86.5297% ( 38) 00:08:21.920 18249.255 - 18350.080: 87.1346% ( 48) 00:08:21.920 18350.080 - 18450.905: 87.5126% ( 30) 00:08:21.920 18450.905 - 18551.729: 87.9158% ( 32) 00:08:21.920 18551.729 - 18652.554: 88.2056% ( 23) 00:08:21.920 18652.554 - 18753.378: 89.0877% ( 70) 00:08:21.920 18753.378 - 18854.203: 89.5917% ( 40) 00:08:21.920 18854.203 - 18955.028: 90.0580% ( 37) 00:08:21.920 18955.028 - 19055.852: 90.5368% ( 38) 00:08:21.920 19055.852 - 19156.677: 91.2424% ( 56) 00:08:21.920 19156.677 - 19257.502: 91.6961% ( 36) 00:08:21.920 19257.502 - 19358.326: 92.2001% ( 40) 00:08:21.920 19358.326 - 19459.151: 92.6033% ( 32) 00:08:21.920 19459.151 - 19559.975: 93.0318% ( 34) 00:08:21.920 19559.975 - 19660.800: 93.4980% ( 37) 00:08:21.920 19660.800 - 19761.625: 94.1532% ( 52) 00:08:21.920 19761.625 - 19862.449: 94.4808% ( 26) 00:08:21.920 19862.449 - 19963.274: 94.8715% ( 31) 00:08:21.920 19963.274 - 20064.098: 95.0227% ( 12) 00:08:21.920 20064.098 - 20164.923: 95.1865% ( 13) 00:08:21.920 20164.923 - 20265.748: 95.4133% ( 18) 00:08:21.920 20265.748 - 20366.572: 95.5897% ( 14) 00:08:21.920 20366.572 - 20467.397: 95.7535% ( 13) 00:08:21.920 20467.397 - 20568.222: 96.1064% ( 28) 00:08:21.920 20568.222 - 20669.046: 96.2450% ( 11) 00:08:21.920 20669.046 - 20769.871: 96.3584% ( 9) 00:08:21.920 20769.871 - 20870.695: 96.4844% ( 10) 00:08:21.920 20870.695 - 20971.520: 96.5474% ( 5) 00:08:21.920 20971.520 - 21072.345: 96.6104% ( 5) 00:08:21.920 21072.345 - 21173.169: 96.6734% ( 5) 00:08:21.920 21173.169 - 21273.994: 96.7238% ( 4) 00:08:21.920 21273.994 - 21374.818: 96.7994% ( 6) 00:08:21.920 21374.818 - 21475.643: 96.8750% ( 6) 00:08:21.920 21475.643 - 21576.468: 96.9506% ( 6) 00:08:21.920 
21576.468 - 21677.292: 97.0388% ( 7) 00:08:21.920
[latency buckets 21677.292 - 35691.914 for the previous controller elided; cumulative IO count reaches 100.0000% ( 9)] 00:08:21.920
00:08:21.920 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:21.920 ==============================================================================
00:08:21.920 Range in us Cumulative IO count
[latency buckets 10334.523 - 35490.265 elided; cumulative IO count reaches 100.0000% ( 7)] 00:08:21.921
00:08:21.921 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:21.921 ==============================================================================
00:08:21.921 Range in us Cumulative IO count
[latency buckets 10233.698 - 34683.668 elided; cumulative IO count reaches 100.0000% ( 8)] 00:08:21.922
00:08:21.922 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:21.922 ==============================================================================
00:08:21.922 Range in us Cumulative IO count
[latency buckets 7763.495 - 35893.563 elided; cumulative IO count reaches 100.0000% ( 1)] 00:08:21.923
00:08:21.923 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:21.923 ==============================================================================
00:08:21.923 Range in us Cumulative IO count
[latency buckets 6906.486 - 35893.563 elided; cumulative IO count reaches 100.0000% ( 7)] 00:08:21.923
00:08:21.923 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:21.924 ==============================================================================
00:08:21.924 Range in us Cumulative IO count
[latency buckets 6200.714 - 21979.766 elided; tail of this histogram retained below] 00:08:21.924 21979.766 -
22080.591: 98.4000% ( 4) 00:08:21.924 22483.889 - 22584.714: 98.4625% ( 5) 00:08:21.924 22584.714 - 22685.538: 98.5625% ( 8) 00:08:21.924 22685.538 - 22786.363: 98.6750% ( 9) 00:08:21.924 22786.363 - 22887.188: 98.7500% ( 6) 00:08:21.924 22887.188 - 22988.012: 98.8000% ( 4) 00:08:21.924 22988.012 - 23088.837: 98.8625% ( 5) 00:08:21.924 23088.837 - 23189.662: 98.9375% ( 6) 00:08:21.924 23189.662 - 23290.486: 98.9875% ( 4) 00:08:21.924 23290.486 - 23391.311: 99.0375% ( 4) 00:08:21.924 23391.311 - 23492.135: 99.0750% ( 3) 00:08:21.924 23492.135 - 23592.960: 99.1250% ( 4) 00:08:21.924 23592.960 - 23693.785: 99.1875% ( 5) 00:08:21.924 23693.785 - 23794.609: 99.2000% ( 1) 00:08:21.924 25811.102 - 26012.751: 99.2875% ( 7) 00:08:21.924 26012.751 - 26214.400: 99.4250% ( 11) 00:08:21.924 26214.400 - 26416.049: 99.5250% ( 8) 00:08:21.924 26416.049 - 26617.698: 99.6500% ( 10) 00:08:21.924 26617.698 - 26819.348: 99.8000% ( 12) 00:08:21.924 26819.348 - 27020.997: 99.9375% ( 11) 00:08:21.924 27020.997 - 27222.646: 100.0000% ( 5) 00:08:21.924 00:08:21.924 03:17:09 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:21.924 00:08:21.924 real 0m2.546s 00:08:21.924 user 0m2.165s 00:08:21.924 sys 0m0.243s 00:08:21.924 03:17:09 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.924 03:17:09 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:21.924 ************************************ 00:08:21.924 END TEST nvme_perf 00:08:21.924 ************************************ 00:08:22.186 03:17:09 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:22.186 03:17:09 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:22.186 03:17:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.186 03:17:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.186 ************************************ 00:08:22.186 START TEST nvme_hello_world 00:08:22.186 ************************************ 00:08:22.186 03:17:09 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:22.186 Initializing NVMe Controllers 00:08:22.186 Attached to 0000:00:13.0 00:08:22.186 Namespace ID: 1 size: 1GB 00:08:22.186 Attached to 0000:00:10.0 00:08:22.186 Namespace ID: 1 size: 6GB 00:08:22.186 Attached to 0000:00:11.0 00:08:22.186 Namespace ID: 1 size: 5GB 00:08:22.186 Attached to 0000:00:12.0 00:08:22.186 Namespace ID: 1 size: 4GB 00:08:22.186 Namespace ID: 2 size: 4GB 00:08:22.186 Namespace ID: 3 size: 4GB 00:08:22.186 Initialization complete. 00:08:22.186 INFO: using host memory buffer for IO 00:08:22.186 Hello world! 00:08:22.186 INFO: using host memory buffer for IO 00:08:22.186 Hello world! 00:08:22.186 INFO: using host memory buffer for IO 00:08:22.186 Hello world! 00:08:22.186 INFO: using host memory buffer for IO 00:08:22.186 Hello world! 00:08:22.186 INFO: using host memory buffer for IO 00:08:22.186 Hello world! 00:08:22.186 INFO: using host memory buffer for IO 00:08:22.186 Hello world! 
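The hello_world pass above can be reproduced outside the harness. A minimal bash sketch, assuming the repository path shown in this log and that the controllers have already been bound to a userspace driver; the -i 0 shared-memory id is copied from the invocation above:

    cd /home/vagrant/spdk_repo/spdk
    sudo scripts/setup.sh                  # assumed prerequisite: bind the NVMe devices for SPDK
    sudo build/examples/hello_world -i 0   # attach to each controller, write "Hello world!", read it back

Each attached namespace then prints one "INFO: using host memory buffer for IO" / "Hello world!" pair, matching the six namespaces listed above.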
00:08:22.186 00:08:22.186 real 0m0.233s 00:08:22.186 user 0m0.084s 00:08:22.186 sys 0m0.106s 00:08:22.186 ************************************ 00:08:22.186 END TEST nvme_hello_world 00:08:22.186 ************************************ 00:08:22.186 03:17:09 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.186 03:17:09 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:22.448 03:17:09 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:22.448 03:17:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.448 03:17:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.448 03:17:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.448 ************************************ 00:08:22.448 START TEST nvme_sgl 00:08:22.448 ************************************ 00:08:22.448 03:17:09 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:22.448 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:22.448 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:22.448 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:22.448 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:22.448 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:22.448 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:22.448 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:22.448 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:22.448 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:22.448 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:22.448 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:22.448 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:22.448 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:22.448 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:22.448 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:22.710 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:22.710 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:22.710 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:22.710 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:22.710 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:22.710 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:22.710 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:22.710 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:22.710 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:22.710 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:22.710 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:22.710 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:22.710 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:22.710 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:22.710 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:22.710 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:22.710 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:22.710 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:22.710 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:08:22.710 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:22.710 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:22.710 NVMe Readv/Writev Request test 00:08:22.710 Attached to 0000:00:13.0 00:08:22.710 Attached to 0000:00:10.0 00:08:22.710 Attached to 0000:00:11.0 00:08:22.710 Attached to 0000:00:12.0 00:08:22.710 0000:00:10.0: build_io_request_2 test passed 00:08:22.710 0000:00:10.0: build_io_request_4 test passed 00:08:22.710 0000:00:10.0: build_io_request_5 test passed 00:08:22.710 0000:00:10.0: build_io_request_6 test passed 00:08:22.710 0000:00:10.0: build_io_request_7 test passed 00:08:22.710 0000:00:10.0: build_io_request_10 test passed 00:08:22.710 0000:00:11.0: build_io_request_2 test passed 00:08:22.710 0000:00:11.0: build_io_request_4 test passed 00:08:22.710 0000:00:11.0: build_io_request_5 test passed 00:08:22.710 0000:00:11.0: build_io_request_6 test passed 00:08:22.710 0000:00:11.0: build_io_request_7 test passed 00:08:22.710 0000:00:11.0: build_io_request_10 test passed 00:08:22.710 Cleaning up... 00:08:22.710 00:08:22.710 real 0m0.304s 00:08:22.710 user 0m0.142s 00:08:22.710 sys 0m0.110s 00:08:22.710 ************************************ 00:08:22.710 END TEST nvme_sgl 00:08:22.710 ************************************ 00:08:22.710 03:17:10 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.710 03:17:10 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:22.710 03:17:10 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:22.710 03:17:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.710 03:17:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.710 03:17:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.710 ************************************ 00:08:22.710 START TEST nvme_e2edp 00:08:22.710 ************************************ 00:08:22.710 03:17:10 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:22.972 NVMe Write/Read with End-to-End data protection test 00:08:22.972 Attached to 0000:00:13.0 00:08:22.972 Attached to 0000:00:10.0 00:08:22.972 Attached to 0000:00:11.0 00:08:22.972 Attached to 0000:00:12.0 00:08:22.972 Cleaning up... 
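The nvme_dp binary above issues writes and reads with end-to-end data protection enabled and exits once verification completes. A sketch of a standalone rerun, using only the path already printed by the harness; wrapping it in `time` approximates the real/user/sys footer each test leaves in this log:

    time sudo /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp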
00:08:22.972 00:08:22.972 real 0m0.216s 00:08:22.972 user 0m0.065s 00:08:22.972 sys 0m0.106s 00:08:22.972 03:17:10 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.972 03:17:10 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:22.972 ************************************ 00:08:22.972 END TEST nvme_e2edp 00:08:22.972 ************************************ 00:08:22.972 03:17:10 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:22.972 03:17:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.972 03:17:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.972 03:17:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.973 ************************************ 00:08:22.973 START TEST nvme_reserve 00:08:22.973 ************************************ 00:08:22.973 03:17:10 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:23.234 ===================================================== 00:08:23.234 NVMe Controller at PCI bus 0, device 19, function 0 00:08:23.234 ===================================================== 00:08:23.234 Reservations: Not Supported 00:08:23.234 ===================================================== 00:08:23.234 NVMe Controller at PCI bus 0, device 16, function 0 00:08:23.234 ===================================================== 00:08:23.234 Reservations: Not Supported 00:08:23.234 ===================================================== 00:08:23.234 NVMe Controller at PCI bus 0, device 17, function 0 00:08:23.234 ===================================================== 00:08:23.234 Reservations: Not Supported 00:08:23.234 ===================================================== 00:08:23.234 NVMe Controller at PCI bus 0, device 18, function 0 00:08:23.234 ===================================================== 00:08:23.234 Reservations: Not Supported 00:08:23.234 Reservation test passed 00:08:23.234 00:08:23.234 real 0m0.238s 00:08:23.234 user 0m0.077s 00:08:23.234 sys 0m0.110s 00:08:23.234 03:17:10 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.234 ************************************ 00:08:23.234 END TEST nvme_reserve 00:08:23.234 ************************************ 00:08:23.234 03:17:10 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:23.234 03:17:10 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:23.234 03:17:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.234 03:17:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.234 03:17:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.234 ************************************ 00:08:23.234 START TEST nvme_err_injection 00:08:23.234 ************************************ 00:08:23.234 03:17:10 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:23.494 NVMe Error Injection test 00:08:23.494 Attached to 0000:00:13.0 00:08:23.494 Attached to 0000:00:10.0 00:08:23.494 Attached to 0000:00:11.0 00:08:23.494 Attached to 0000:00:12.0 00:08:23.494 0000:00:13.0: get features failed as expected 00:08:23.494 0000:00:10.0: get features failed as expected 00:08:23.494 0000:00:11.0: get features failed as expected 00:08:23.494 0000:00:12.0: get features failed as expected 00:08:23.494 
0000:00:13.0: get features successfully as expected 00:08:23.494 0000:00:10.0: get features successfully as expected 00:08:23.494 0000:00:11.0: get features successfully as expected 00:08:23.494 0000:00:12.0: get features successfully as expected 00:08:23.494 0000:00:12.0: read failed as expected 00:08:23.494 0000:00:13.0: read failed as expected 00:08:23.494 0000:00:10.0: read failed as expected 00:08:23.494 0000:00:11.0: read failed as expected 00:08:23.494 0000:00:13.0: read successfully as expected 00:08:23.494 0000:00:10.0: read successfully as expected 00:08:23.494 0000:00:11.0: read successfully as expected 00:08:23.494 0000:00:12.0: read successfully as expected 00:08:23.494 Cleaning up... 00:08:23.494 00:08:23.494 real 0m0.255s 00:08:23.494 user 0m0.089s 00:08:23.494 sys 0m0.121s 00:08:23.494 ************************************ 00:08:23.494 END TEST nvme_err_injection 00:08:23.494 ************************************ 00:08:23.494 03:17:11 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.494 03:17:11 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:23.753 03:17:11 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:23.753 03:17:11 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:23.753 03:17:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.753 03:17:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.753 ************************************ 00:08:23.753 START TEST nvme_overhead 00:08:23.754 ************************************ 00:08:23.754 03:17:11 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:25.137 Initializing NVMe Controllers 00:08:25.137 Attached to 0000:00:13.0 00:08:25.137 Attached to 0000:00:10.0 00:08:25.137 Attached to 0000:00:11.0 00:08:25.137 Attached to 0000:00:12.0 00:08:25.137 Initialization complete. Launching workers. 
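Reading the overhead invocation above: -o 4096 sets the per-I/O size in bytes, -t 1 the run time in seconds, -H requests the submit/complete histograms that follow, and -i 0 selects the shared-memory instance id. These flag readings are inferred from the command line logged above, not from the tool's source. A sketch of a longer run under the same assumptions:

    sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 10 -H -i 0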
00:08:25.137 submit (in ns) avg, min, max = 12116.2, 9964.6, 96603.8
00:08:25.137 complete (in ns) avg, min, max = 8087.7, 7180.0, 237931.5
00:08:25.137
00:08:25.137 Submit histogram
00:08:25.137 ================
00:08:25.137 Range in us Cumulative Count
[submit-latency buckets 9.945 - 96.886 elided; cumulative count reaches 100.0000% ( 1)] 00:08:25.137
00:08:25.137 Complete histogram
00:08:25.137 ==================
00:08:25.137 Range in us Cumulative Count
[complete-latency buckets 7.138 - 13.391 elided; tail of this histogram retained below] 00:08:25.138 13.391 -
13.489: 98.1350% ( 3) 00:08:25.138 13.489 - 13.588: 98.1840% ( 2) 00:08:25.138 13.686 - 13.785: 98.2822% ( 4) 00:08:25.138 13.785 - 13.883: 98.3804% ( 4) 00:08:25.138 13.883 - 13.982: 98.4049% ( 1) 00:08:25.138 13.982 - 14.080: 98.4785% ( 3) 00:08:25.138 14.080 - 14.178: 98.5767% ( 4) 00:08:25.138 14.178 - 14.277: 98.6258% ( 2) 00:08:25.138 14.277 - 14.375: 98.6503% ( 1) 00:08:25.138 14.375 - 14.474: 98.6994% ( 2) 00:08:25.138 14.474 - 14.572: 98.7239% ( 1) 00:08:25.138 14.572 - 14.671: 98.7485% ( 1) 00:08:25.138 15.754 - 15.852: 98.7730% ( 1) 00:08:25.138 15.951 - 16.049: 98.7975% ( 1) 00:08:25.138 16.246 - 16.345: 98.8221% ( 1) 00:08:25.138 18.215 - 18.314: 98.8466% ( 1) 00:08:25.138 18.412 - 18.511: 98.8712% ( 1) 00:08:25.138 19.397 - 19.495: 98.9202% ( 2) 00:08:25.138 19.594 - 19.692: 98.9448% ( 1) 00:08:25.138 19.692 - 19.791: 98.9693% ( 1) 00:08:25.138 19.988 - 20.086: 98.9939% ( 1) 00:08:25.138 20.185 - 20.283: 99.0184% ( 1) 00:08:25.138 20.480 - 20.578: 99.0429% ( 1) 00:08:25.138 21.268 - 21.366: 99.0675% ( 1) 00:08:25.138 21.465 - 21.563: 99.0920% ( 1) 00:08:25.138 22.154 - 22.252: 99.1656% ( 3) 00:08:25.138 22.252 - 22.351: 99.2147% ( 2) 00:08:25.138 22.351 - 22.449: 99.2883% ( 3) 00:08:25.138 22.449 - 22.548: 99.3129% ( 1) 00:08:25.138 22.548 - 22.646: 99.4356% ( 5) 00:08:25.138 22.646 - 22.745: 99.5092% ( 3) 00:08:25.138 22.745 - 22.843: 99.5337% ( 1) 00:08:25.138 22.942 - 23.040: 99.5583% ( 1) 00:08:25.138 23.138 - 23.237: 99.5828% ( 1) 00:08:25.138 23.237 - 23.335: 99.6074% ( 1) 00:08:25.138 23.729 - 23.828: 99.6319% ( 1) 00:08:25.138 25.108 - 25.206: 99.6564% ( 1) 00:08:25.138 27.372 - 27.569: 99.6810% ( 1) 00:08:25.138 32.689 - 32.886: 99.7055% ( 1) 00:08:25.138 33.280 - 33.477: 99.7301% ( 1) 00:08:25.138 33.871 - 34.068: 99.7546% ( 1) 00:08:25.138 35.249 - 35.446: 99.7791% ( 1) 00:08:25.138 37.218 - 37.415: 99.8037% ( 1) 00:08:25.138 38.203 - 38.400: 99.8282% ( 1) 00:08:25.138 38.400 - 38.597: 99.8528% ( 1) 00:08:25.138 44.898 - 45.095: 99.8773% ( 1) 00:08:25.138 49.231 - 49.428: 99.9018% ( 1) 00:08:25.138 53.563 - 53.957: 99.9264% ( 1) 00:08:25.138 56.320 - 56.714: 99.9509% ( 1) 00:08:25.138 83.102 - 83.495: 99.9755% ( 1) 00:08:25.138 237.883 - 239.458: 100.0000% ( 1) 00:08:25.138 00:08:25.138 00:08:25.138 real 0m1.220s 00:08:25.138 user 0m1.070s 00:08:25.138 sys 0m0.095s 00:08:25.138 ************************************ 00:08:25.138 END TEST nvme_overhead 00:08:25.138 ************************************ 00:08:25.138 03:17:12 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.138 03:17:12 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:25.138 03:17:12 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:25.138 03:17:12 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:25.138 03:17:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.138 03:17:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.138 ************************************ 00:08:25.138 START TEST nvme_arbitration 00:08:25.138 ************************************ 00:08:25.138 03:17:12 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:28.476 Initializing NVMe Controllers 00:08:28.476 Attached to 0000:00:13.0 00:08:28.476 Attached to 0000:00:10.0 00:08:28.476 Attached to 0000:00:11.0 00:08:28.476 Attached to 0000:00:12.0 00:08:28.476 Associating QEMU NVMe Ctrl 
(12343 ) with lcore 0 00:08:28.476 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:08:28.476 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:08:28.476 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:28.476 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:28.476 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:28.476 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:28.476 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:28.476 Initialization complete. Launching workers. 00:08:28.476 Starting thread on core 1 with urgent priority queue 00:08:28.476 Starting thread on core 2 with urgent priority queue 00:08:28.476 Starting thread on core 3 with urgent priority queue 00:08:28.476 Starting thread on core 0 with urgent priority queue 00:08:28.476 QEMU NVMe Ctrl (12343 ) core 0: 4309.33 IO/s 23.21 secs/100000 ios 00:08:28.476 QEMU NVMe Ctrl (12342 ) core 0: 4309.33 IO/s 23.21 secs/100000 ios 00:08:28.476 QEMU NVMe Ctrl (12340 ) core 1: 4352.00 IO/s 22.98 secs/100000 ios 00:08:28.476 QEMU NVMe Ctrl (12342 ) core 1: 4352.00 IO/s 22.98 secs/100000 ios 00:08:28.476 QEMU NVMe Ctrl (12341 ) core 2: 3882.67 IO/s 25.76 secs/100000 ios 00:08:28.476 QEMU NVMe Ctrl (12342 ) core 3: 3882.67 IO/s 25.76 secs/100000 ios 00:08:28.476 ======================================================== 00:08:28.476 00:08:28.476 00:08:28.477 real 0m3.279s 00:08:28.477 user 0m9.030s 00:08:28.477 sys 0m0.145s 00:08:28.477 03:17:15 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.477 ************************************ 00:08:28.477 END TEST nvme_arbitration 00:08:28.477 ************************************ 00:08:28.477 03:17:15 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:28.477 03:17:15 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:28.477 03:17:15 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:28.477 03:17:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.477 03:17:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:28.477 ************************************ 00:08:28.477 START TEST nvme_single_aen 00:08:28.477 ************************************ 00:08:28.477 03:17:15 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:28.477 Asynchronous Event Request test 00:08:28.477 Attached to 0000:00:13.0 00:08:28.477 Attached to 0000:00:10.0 00:08:28.477 Attached to 0000:00:11.0 00:08:28.477 Attached to 0000:00:12.0 00:08:28.477 Reset controller to setup AER completions for this process 00:08:28.477 Registering asynchronous event callbacks... 
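The nvme_arbitration pass that just finished above drives the bundled SPDK arbitration example against all four QEMU NVMe controllers, with six namespace/lcore associations spread over cores 0-3. A minimal sketch of reproducing that run by hand, using the binary and flags recorded verbatim in this log; the longer option string in the comment is the effective configuration the example itself echoed for this run:

  # Run the arbitration example for 3 seconds (-t 3) with SPDK shm id 0 (-i 0).
  # The example expands this to the configuration printed in the log:
  #   -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
  /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0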
00:08:28.477 Getting orig temperature thresholds of all controllers 00:08:28.477 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:28.477 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:28.477 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:28.477 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:28.477 Setting all controllers temperature threshold low to trigger AER 00:08:28.477 Waiting for all controllers temperature threshold to be set lower 00:08:28.477 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:28.477 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:28.477 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:28.477 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:28.477 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:28.477 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:28.477 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:28.477 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:28.477 Waiting for all controllers to trigger AER and reset threshold 00:08:28.477 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.477 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.477 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.477 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.477 Cleaning up... 00:08:28.477 00:08:28.477 real 0m0.239s 00:08:28.477 user 0m0.091s 00:08:28.477 sys 0m0.098s 00:08:28.477 03:17:15 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.477 ************************************ 00:08:28.477 END TEST nvme_single_aen 00:08:28.477 ************************************ 00:08:28.477 03:17:15 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:28.477 03:17:15 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:28.477 03:17:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.477 03:17:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.477 03:17:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:28.477 ************************************ 00:08:28.477 START TEST nvme_doorbell_aers 00:08:28.477 ************************************ 00:08:28.477 03:17:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:28.477 03:17:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:28.477 03:17:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:28.477 03:17:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:28.477 03:17:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:28.477 03:17:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:28.477 03:17:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:28.477 03:17:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:28.477 03:17:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:28.477 03:17:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
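The nvme_doorbell_aers test starting here first enumerates the PCIe addresses of every attached controller, then runs the doorbell_aers tool once per device with a 10-second cap. A sketch of that loop, assembled from the commands visible in this log (the bdfs array and the gen_nvme.sh/jq pipeline are exactly what the xtrace shows; the loop body is the per-device invocation repeated below):

  # Collect controller PCI addresses; this run found 4.
  bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  # Run the invalid-doorbell-write tests against each controller;
  # timeout --preserve-status caps each device at 10 seconds.
  for bdf in "${bdfs[@]}"; do
    timeout --preserve-status 10 \
      /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers \
      -r "trtype:PCIe traddr:$bdf"
  done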
00:08:28.763 03:17:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:28.763 03:17:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:28.763 03:17:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:28.763 03:17:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:28.763 [2024-11-21 03:17:16.297730] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76544) is not found. Dropping the request. 00:08:38.765 Executing: test_write_invalid_db 00:08:38.765 Waiting for AER completion... 00:08:38.765 Failure: test_write_invalid_db 00:08:38.765 00:08:38.765 Executing: test_invalid_db_write_overflow_sq 00:08:38.765 Waiting for AER completion... 00:08:38.765 Failure: test_invalid_db_write_overflow_sq 00:08:38.765 00:08:38.765 Executing: test_invalid_db_write_overflow_cq 00:08:38.765 Waiting for AER completion... 00:08:38.765 Failure: test_invalid_db_write_overflow_cq 00:08:38.765 00:08:38.765 03:17:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:38.765 03:17:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:39.026 [2024-11-21 03:17:26.341642] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76544) is not found. Dropping the request. 00:08:49.024 Executing: test_write_invalid_db 00:08:49.024 Waiting for AER completion... 00:08:49.024 Failure: test_write_invalid_db 00:08:49.024 00:08:49.024 Executing: test_invalid_db_write_overflow_sq 00:08:49.024 Waiting for AER completion... 00:08:49.024 Failure: test_invalid_db_write_overflow_sq 00:08:49.024 00:08:49.024 Executing: test_invalid_db_write_overflow_cq 00:08:49.024 Waiting for AER completion... 00:08:49.025 Failure: test_invalid_db_write_overflow_cq 00:08:49.025 00:08:49.025 03:17:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:49.025 03:17:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:49.025 [2024-11-21 03:17:36.362655] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76544) is not found. Dropping the request. 00:08:59.072 Executing: test_write_invalid_db 00:08:59.072 Waiting for AER completion... 00:08:59.072 Failure: test_write_invalid_db 00:08:59.072 00:08:59.072 Executing: test_invalid_db_write_overflow_sq 00:08:59.072 Waiting for AER completion... 00:08:59.072 Failure: test_invalid_db_write_overflow_sq 00:08:59.072 00:08:59.072 Executing: test_invalid_db_write_overflow_cq 00:08:59.072 Waiting for AER completion... 
00:08:59.072 Failure: test_invalid_db_write_overflow_cq 00:08:59.072 00:08:59.072 03:17:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:59.072 03:17:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:59.072 [2024-11-21 03:17:46.393751] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76544) is not found. Dropping the request. 00:09:09.048 Executing: test_write_invalid_db 00:09:09.048 Waiting for AER completion... 00:09:09.048 Failure: test_write_invalid_db 00:09:09.048 00:09:09.048 Executing: test_invalid_db_write_overflow_sq 00:09:09.048 Waiting for AER completion... 00:09:09.048 Failure: test_invalid_db_write_overflow_sq 00:09:09.048 00:09:09.048 Executing: test_invalid_db_write_overflow_cq 00:09:09.048 Waiting for AER completion... 00:09:09.048 Failure: test_invalid_db_write_overflow_cq 00:09:09.048 00:09:09.048 00:09:09.048 real 0m40.207s 00:09:09.048 user 0m34.180s 00:09:09.048 sys 0m5.597s 00:09:09.048 03:17:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.048 ************************************ 00:09:09.048 END TEST nvme_doorbell_aers 00:09:09.048 ************************************ 00:09:09.048 03:17:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:09.048 03:17:56 nvme -- nvme/nvme.sh@97 -- # uname 00:09:09.048 03:17:56 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:09.048 03:17:56 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:09.048 03:17:56 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:09.048 03:17:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.048 03:17:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:09.048 ************************************ 00:09:09.048 START TEST nvme_multi_aen 00:09:09.048 ************************************ 00:09:09.048 03:17:56 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:09.048 [2024-11-21 03:17:56.432919] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76544) is not found. Dropping the request. 00:09:09.048 [2024-11-21 03:17:56.432998] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76544) is not found. Dropping the request. 00:09:09.048 [2024-11-21 03:17:56.433011] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76544) is not found. Dropping the request. 00:09:09.048 [2024-11-21 03:17:56.434177] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76544) is not found. Dropping the request. 00:09:09.048 [2024-11-21 03:17:56.434204] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76544) is not found. Dropping the request. 00:09:09.048 [2024-11-21 03:17:56.434213] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76544) is not found. Dropping the request. 00:09:09.048 [2024-11-21 03:17:56.435226] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76544) is not found. 
Dropping the request. 00:09:09.048 [2024-11-21 03:17:56.435251] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76544) is not found. Dropping the request. 00:09:09.048 [2024-11-21 03:17:56.435260] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76544) is not found. Dropping the request. 00:09:09.048 [2024-11-21 03:17:56.436206] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76544) is not found. Dropping the request. 00:09:09.048 [2024-11-21 03:17:56.436232] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76544) is not found. Dropping the request. 00:09:09.048 [2024-11-21 03:17:56.436240] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76544) is not found. Dropping the request. 00:09:09.048 Child process pid: 77064 00:09:09.307 [Child] Asynchronous Event Request test 00:09:09.307 [Child] Attached to 0000:00:13.0 00:09:09.307 [Child] Attached to 0000:00:10.0 00:09:09.307 [Child] Attached to 0000:00:11.0 00:09:09.307 [Child] Attached to 0000:00:12.0 00:09:09.307 [Child] Registering asynchronous event callbacks... 00:09:09.307 [Child] Getting orig temperature thresholds of all controllers 00:09:09.307 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:09.307 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:09.307 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:09.307 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:09.307 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:09.307 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:09.307 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:09.307 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:09.307 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:09.307 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:09.307 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:09.307 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:09.307 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:09.307 [Child] Cleaning up... 00:09:09.307 Asynchronous Event Request test 00:09:09.307 Attached to 0000:00:13.0 00:09:09.307 Attached to 0000:00:10.0 00:09:09.307 Attached to 0000:00:11.0 00:09:09.307 Attached to 0000:00:12.0 00:09:09.307 Reset controller to setup AER completions for this process 00:09:09.307 Registering asynchronous event callbacks... 
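The nvme_multi_aen run in progress here launches the same aer tool with -m added; the [Child]-prefixed lines above came from a child process (pid 77064 in this run) that performed the full temperature-threshold AER sequence before the parent pass that continues below. The invocation as recorded in this log; the flag meanings are inferred from the surrounding output rather than from the tool's help text:

  # -m: multi-process mode (a child runs the AER sequence first, see "[Child]" lines)
  # -T: temperature-threshold AER test   -i 0: SPDK shared-memory id 0
  /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0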
00:09:09.307 Getting orig temperature thresholds of all controllers 00:09:09.307 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:09.307 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:09.307 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:09.307 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:09.307 Setting all controllers temperature threshold low to trigger AER 00:09:09.307 Waiting for all controllers temperature threshold to be set lower 00:09:09.307 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:09.307 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:09.307 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:09.307 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:09.307 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:09.307 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:09.307 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:09.307 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:09.307 Waiting for all controllers to trigger AER and reset threshold 00:09:09.307 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:09.307 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:09.307 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:09.307 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:09.307 Cleaning up... 00:09:09.307 00:09:09.307 real 0m0.402s 00:09:09.307 user 0m0.124s 00:09:09.307 sys 0m0.185s 00:09:09.307 03:17:56 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.307 03:17:56 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:09.307 ************************************ 00:09:09.307 END TEST nvme_multi_aen 00:09:09.307 ************************************ 00:09:09.307 03:17:56 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:09.307 03:17:56 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:09.307 03:17:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.307 03:17:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:09.307 ************************************ 00:09:09.307 START TEST nvme_startup 00:09:09.307 ************************************ 00:09:09.307 03:17:56 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:09.565 Initializing NVMe Controllers 00:09:09.565 Attached to 0000:00:13.0 00:09:09.565 Attached to 0000:00:10.0 00:09:09.565 Attached to 0000:00:11.0 00:09:09.565 Attached to 0000:00:12.0 00:09:09.565 Initialization complete. 00:09:09.565 Time used:138111.938 (us). 
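nvme_startup, whose output ends just above, simply attaches all controllers and reports the elapsed initialization time; this run took 138111.938 us (about 138 ms) for the four controllers. The invocation as logged, with -t presumably the time budget in microseconds:

  # Attach every controller and print "Time used" in microseconds.
  /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000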
00:09:09.565 00:09:09.565 real 0m0.193s 00:09:09.565 user 0m0.059s 00:09:09.565 sys 0m0.092s 00:09:09.565 03:17:56 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.565 ************************************ 00:09:09.565 END TEST nvme_startup 00:09:09.565 ************************************ 00:09:09.565 03:17:56 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:09.565 03:17:56 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:09.565 03:17:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.565 03:17:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.565 03:17:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:09.565 ************************************ 00:09:09.565 START TEST nvme_multi_secondary 00:09:09.565 ************************************ 00:09:09.565 03:17:56 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:09.565 03:17:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=77115 00:09:09.565 03:17:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=77116 00:09:09.565 03:17:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:09.565 03:17:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:09.565 03:17:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:12.847 Initializing NVMe Controllers 00:09:12.847 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:12.847 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:12.847 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:12.847 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:12.847 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:12.847 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:12.847 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:12.847 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:12.847 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:12.847 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:12.847 Initialization complete. Launching workers. 
00:09:12.847 ======================================================== 00:09:12.847 Latency(us) 00:09:12.847 Device Information : IOPS MiB/s Average min max 00:09:12.847 PCIE (0000:00:13.0) NSID 1 from core 1: 7714.49 30.13 2073.59 1012.68 5821.01 00:09:12.847 PCIE (0000:00:10.0) NSID 1 from core 1: 7714.49 30.13 2072.70 1024.44 5896.48 00:09:12.847 PCIE (0000:00:11.0) NSID 1 from core 1: 7714.49 30.13 2073.63 1069.09 5741.14 00:09:12.847 PCIE (0000:00:12.0) NSID 1 from core 1: 7714.49 30.13 2073.61 1043.53 5372.97 00:09:12.847 PCIE (0000:00:12.0) NSID 2 from core 1: 7714.49 30.13 2073.59 1031.40 5309.10 00:09:12.847 PCIE (0000:00:12.0) NSID 3 from core 1: 7714.49 30.13 2073.58 939.07 5630.44 00:09:12.847 ======================================================== 00:09:12.847 Total : 46286.92 180.81 2073.45 939.07 5896.48 00:09:12.847 00:09:12.847 Initializing NVMe Controllers 00:09:12.847 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:12.847 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:12.847 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:12.847 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:12.847 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:12.847 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:12.847 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:12.847 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:12.847 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:12.847 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:12.847 Initialization complete. Launching workers. 00:09:12.847 ======================================================== 00:09:12.847 Latency(us) 00:09:12.847 Device Information : IOPS MiB/s Average min max 00:09:12.847 PCIE (0000:00:13.0) NSID 1 from core 2: 3231.98 12.62 4950.09 1354.13 14637.21 00:09:12.847 PCIE (0000:00:10.0) NSID 1 from core 2: 3231.98 12.62 4949.03 1205.27 15787.13 00:09:12.847 PCIE (0000:00:11.0) NSID 1 from core 2: 3231.98 12.62 4950.28 1259.08 15425.07 00:09:12.847 PCIE (0000:00:12.0) NSID 1 from core 2: 3231.98 12.62 4950.28 1216.97 14147.08 00:09:12.847 PCIE (0000:00:12.0) NSID 2 from core 2: 3231.98 12.62 4950.27 981.91 13536.70 00:09:12.847 PCIE (0000:00:12.0) NSID 3 from core 2: 3231.98 12.62 4950.23 840.70 13600.96 00:09:12.847 ======================================================== 00:09:12.847 Total : 19391.91 75.75 4950.03 840.70 15787.13 00:09:12.847 00:09:12.847 03:18:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 77115 00:09:14.761 Initializing NVMe Controllers 00:09:14.761 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:14.761 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:14.761 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:14.761 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:14.761 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:14.761 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:14.761 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:14.761 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:14.761 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:14.761 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:14.761 Initialization complete. Launching workers. 
00:09:14.761 ======================================================== 00:09:14.761 Latency(us) 00:09:14.761 Device Information : IOPS MiB/s Average min max 00:09:14.761 PCIE (0000:00:13.0) NSID 1 from core 0: 10676.94 41.71 1498.19 700.00 5677.72 00:09:14.761 PCIE (0000:00:10.0) NSID 1 from core 0: 10676.94 41.71 1497.33 682.76 5971.96 00:09:14.761 PCIE (0000:00:11.0) NSID 1 from core 0: 10676.94 41.71 1498.17 702.30 6316.02 00:09:14.761 PCIE (0000:00:12.0) NSID 1 from core 0: 10676.94 41.71 1498.15 690.69 6078.02 00:09:14.761 PCIE (0000:00:12.0) NSID 2 from core 0: 10676.94 41.71 1498.14 556.14 5918.26 00:09:14.761 PCIE (0000:00:12.0) NSID 3 from core 0: 10676.94 41.71 1498.11 481.65 5840.44 00:09:14.761 ======================================================== 00:09:14.761 Total : 64061.66 250.24 1498.02 481.65 6316.02 00:09:14.761 00:09:14.761 03:18:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 77116 00:09:14.761 03:18:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=77185 00:09:14.761 03:18:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:14.761 03:18:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=77186 00:09:14.761 03:18:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:14.761 03:18:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:18.050 Initializing NVMe Controllers 00:09:18.050 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:18.050 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:18.050 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:18.050 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:18.050 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:18.050 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:18.050 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:18.050 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:18.050 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:18.050 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:18.050 Initialization complete. Launching workers. 
00:09:18.050 ======================================================== 00:09:18.050 Latency(us) 00:09:18.050 Device Information : IOPS MiB/s Average min max 00:09:18.050 PCIE (0000:00:13.0) NSID 1 from core 0: 7720.88 30.16 2071.89 737.37 7744.62 00:09:18.050 PCIE (0000:00:10.0) NSID 1 from core 0: 7720.88 30.16 2070.95 722.95 7197.11 00:09:18.050 PCIE (0000:00:11.0) NSID 1 from core 0: 7720.88 30.16 2072.00 728.46 6998.52 00:09:18.050 PCIE (0000:00:12.0) NSID 1 from core 0: 7720.88 30.16 2071.99 743.24 6680.51 00:09:18.050 PCIE (0000:00:12.0) NSID 2 from core 0: 7720.88 30.16 2072.00 739.40 6851.84 00:09:18.050 PCIE (0000:00:12.0) NSID 3 from core 0: 7720.88 30.16 2071.99 742.33 7337.05 00:09:18.050 ======================================================== 00:09:18.050 Total : 46325.25 180.96 2071.80 722.95 7744.62 00:09:18.050 00:09:18.050 Initializing NVMe Controllers 00:09:18.050 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:18.050 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:18.050 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:18.050 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:18.050 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:18.050 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:18.050 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:18.050 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:18.051 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:18.051 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:18.051 Initialization complete. Launching workers. 00:09:18.051 ======================================================== 00:09:18.051 Latency(us) 00:09:18.051 Device Information : IOPS MiB/s Average min max 00:09:18.051 PCIE (0000:00:13.0) NSID 1 from core 1: 7704.38 30.10 2076.29 708.64 5670.70 00:09:18.051 PCIE (0000:00:10.0) NSID 1 from core 1: 7704.38 30.10 2075.32 703.97 5679.89 00:09:18.051 PCIE (0000:00:11.0) NSID 1 from core 1: 7704.38 30.10 2076.29 731.19 5601.65 00:09:18.051 PCIE (0000:00:12.0) NSID 1 from core 1: 7704.38 30.10 2076.26 724.80 5941.08 00:09:18.051 PCIE (0000:00:12.0) NSID 2 from core 1: 7704.38 30.10 2076.31 714.32 5898.19 00:09:18.051 PCIE (0000:00:12.0) NSID 3 from core 1: 7704.38 30.10 2076.29 713.68 5782.33 00:09:18.051 ======================================================== 00:09:18.051 Total : 46226.29 180.57 2076.13 703.97 5941.08 00:09:18.051 00:09:20.580 Initializing NVMe Controllers 00:09:20.580 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:20.580 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:20.580 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:20.580 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:20.580 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:20.580 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:20.580 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:20.580 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:20.580 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:20.580 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:20.580 Initialization complete. Launching workers. 
00:09:20.580 ======================================================== 00:09:20.580 Latency(us) 00:09:20.580 Device Information : IOPS MiB/s Average min max 00:09:20.580 PCIE (0000:00:13.0) NSID 1 from core 2: 4726.46 18.46 3384.65 764.32 15768.63 00:09:20.580 PCIE (0000:00:10.0) NSID 1 from core 2: 4726.46 18.46 3383.63 733.89 15453.78 00:09:20.580 PCIE (0000:00:11.0) NSID 1 from core 2: 4726.46 18.46 3384.54 674.37 12913.38 00:09:20.580 PCIE (0000:00:12.0) NSID 1 from core 2: 4726.46 18.46 3384.62 604.65 12997.51 00:09:20.580 PCIE (0000:00:12.0) NSID 2 from core 2: 4726.46 18.46 3384.20 514.53 12712.14 00:09:20.580 PCIE (0000:00:12.0) NSID 3 from core 2: 4726.46 18.46 3384.28 422.77 13261.03 00:09:20.580 ======================================================== 00:09:20.580 Total : 28358.79 110.78 3384.32 422.77 15768.63 00:09:20.580 00:09:20.580 ************************************ 00:09:20.580 END TEST nvme_multi_secondary 00:09:20.580 ************************************ 00:09:20.580 03:18:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 77185 00:09:20.580 03:18:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 77186 00:09:20.580 00:09:20.580 real 0m10.645s 00:09:20.580 user 0m18.358s 00:09:20.580 sys 0m0.596s 00:09:20.580 03:18:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.580 03:18:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:20.580 03:18:07 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:20.580 03:18:07 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:20.580 03:18:07 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/76136 ]] 00:09:20.580 03:18:07 nvme -- common/autotest_common.sh@1094 -- # kill 76136 00:09:20.580 03:18:07 nvme -- common/autotest_common.sh@1095 -- # wait 76136 00:09:20.581 [2024-11-21 03:18:07.596716] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77063) is not found. Dropping the request. 00:09:20.581 [2024-11-21 03:18:07.596794] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77063) is not found. Dropping the request. 00:09:20.581 [2024-11-21 03:18:07.596821] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77063) is not found. Dropping the request. 00:09:20.581 [2024-11-21 03:18:07.596842] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77063) is not found. Dropping the request. 00:09:20.581 [2024-11-21 03:18:07.597537] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77063) is not found. Dropping the request. 00:09:20.581 [2024-11-21 03:18:07.597596] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77063) is not found. Dropping the request. 00:09:20.581 [2024-11-21 03:18:07.597618] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77063) is not found. Dropping the request. 00:09:20.581 [2024-11-21 03:18:07.597636] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77063) is not found. Dropping the request. 00:09:20.581 [2024-11-21 03:18:07.598284] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77063) is not found. Dropping the request. 
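nvme_multi_secondary, summarized above, runs three spdk_nvme_perf instances concurrently against the same SPDK shared-memory instance (-i 0) on disjoint core masks, then waits on the two backgrounded pids. A sketch of the pattern reconstructed from the perf invocations and wait steps in this log; PERF is shorthand introduced here, and the background/foreground split is inferred from the recorded pids rather than copied from nvme.sh:

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  # One longer (-t 5) and two shorter (-t 3) 4 KiB read workloads, one per core.
  $PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
  $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
  $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
  wait "$pid0"
  wait "$pid1"

The repeated "The owning process (pid 77063) is not found. Dropping the request." messages around this point appear to be teardown noise from kill_stub, which shuts down the shared stub while admin requests from an already-exited process are still pending; the suite proceeds past them.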
00:09:20.581 [2024-11-21 03:18:07.598336] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77063) is not found. Dropping the request. 00:09:20.581 [2024-11-21 03:18:07.598366] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77063) is not found. Dropping the request. 00:09:20.581 [2024-11-21 03:18:07.598385] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77063) is not found. Dropping the request. 00:09:20.581 [2024-11-21 03:18:07.599036] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77063) is not found. Dropping the request. 00:09:20.581 [2024-11-21 03:18:07.599087] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77063) is not found. Dropping the request. 00:09:20.581 [2024-11-21 03:18:07.599109] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77063) is not found. Dropping the request. 00:09:20.581 [2024-11-21 03:18:07.599126] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77063) is not found. Dropping the request. 00:09:20.581 03:18:07 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:20.581 03:18:07 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:20.581 03:18:07 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:20.581 03:18:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.581 03:18:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.581 03:18:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.581 ************************************ 00:09:20.581 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:20.581 ************************************ 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:20.581 * Looking for test storage... 
00:09:20.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:20.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.581 --rc genhtml_branch_coverage=1 00:09:20.581 --rc genhtml_function_coverage=1 00:09:20.581 --rc genhtml_legend=1 00:09:20.581 --rc geninfo_all_blocks=1 00:09:20.581 --rc geninfo_unexecuted_blocks=1 00:09:20.581 00:09:20.581 ' 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:20.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.581 --rc genhtml_branch_coverage=1 00:09:20.581 --rc genhtml_function_coverage=1 00:09:20.581 --rc genhtml_legend=1 00:09:20.581 --rc geninfo_all_blocks=1 00:09:20.581 --rc geninfo_unexecuted_blocks=1 00:09:20.581 00:09:20.581 ' 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:20.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.581 --rc genhtml_branch_coverage=1 00:09:20.581 --rc genhtml_function_coverage=1 00:09:20.581 --rc genhtml_legend=1 00:09:20.581 --rc geninfo_all_blocks=1 00:09:20.581 --rc geninfo_unexecuted_blocks=1 00:09:20.581 00:09:20.581 ' 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:20.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.581 --rc genhtml_branch_coverage=1 00:09:20.581 --rc genhtml_function_coverage=1 00:09:20.581 --rc genhtml_legend=1 00:09:20.581 --rc geninfo_all_blocks=1 00:09:20.581 --rc geninfo_unexecuted_blocks=1 00:09:20.581 00:09:20.581 ' 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:20.581 
03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:20.581 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=77354 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 77354 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 77354 ']' 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
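Before injecting errors, bdev_nvme_reset_stuck_adm_cmd brings up an SPDK target and attaches the first controller (0000:00:10.0, chosen by get_first_nvme_bdf above) as nvme0. The bring-up in sketch form; the commands are the ones recorded in this log, with rpc.py standing in for the suite's rpc_cmd wrapper and the backgrounding inferred from the waitforlisten pattern:

  # Start the SPDK target on 4 cores (-m 0xF) and wait for its RPC socket.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
  spdk_target_pid=$!
  # (waitforlisten blocks until /var/tmp/spdk.sock accepts connections)
  # Attach the PCIe controller as bdev-layer controller "nvme0".
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
      -b nvme0 -t PCIe -a 0000:00:10.0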
00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.582 03:18:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:20.582 [2024-11-21 03:18:07.954462] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:09:20.582 [2024-11-21 03:18:07.954559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77354 ] 00:09:20.582 [2024-11-21 03:18:08.089079] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:20.582 [2024-11-21 03:18:08.118455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:20.840 [2024-11-21 03:18:08.140773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.840 [2024-11-21 03:18:08.141057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.840 [2024-11-21 03:18:08.141434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.840 [2024-11-21 03:18:08.141477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.406 03:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.407 03:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:21.407 03:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:21.407 03:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.407 03:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:21.407 nvme0n1 00:09:21.407 03:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.407 03:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:21.407 03:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_UTaBr.txt 00:09:21.407 03:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:21.407 03:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.407 03:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:21.407 true 00:09:21.407 03:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.407 03:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:21.407 03:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732159088 00:09:21.407 03:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=77377 00:09:21.407 03:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:21.407 03:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:21.407 03:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:23.943 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:23.943 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.943 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:23.943 [2024-11-21 03:18:10.877644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:23.943 [2024-11-21 03:18:10.878259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:23.944 [2024-11-21 03:18:10.878301] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:23.944 [2024-11-21 03:18:10.878315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.944 [2024-11-21 03:18:10.880162] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.944 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 77377 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 77377 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 77377 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_UTaBr.txt 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 
00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_UTaBr.txt 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 77354 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 77354 ']' 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 77354 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77354 00:09:23.944 killing process with pid 77354 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77354' 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 77354 00:09:23.944 03:18:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 77354 00:09:23.944 03:18:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:23.944 03:18:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 
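In sketch form, the sequence this test just completed: arm a one-shot injected error on the next admin Get Features command (--opc 10 = 0x0a), with --do_not_submit so the command is held rather than sent to the device; issue a Get Features (Number of Queues, cdw10 0x7, encoded in the base64 payload) that then sits stuck; and reset the controller, which must manually complete the held command with the injected status, decoded above from the .cpl field as SCT 0x0 / SC 0x1 and printed as INVALID OPCODE (00/01). Commands copied from the log; RPC is shorthand introduced here:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Hold the next admin opcode 0x0a (Get Features) and stamp it SCT=0/SC=1.
  $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  # Send the Get Features that will get stuck (payload as logged).
  $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
      -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
  sleep 2
  # The reset path must complete the stuck command with the injected status;
  # the test then decodes .cpl from the send_cmd output and, just below,
  # checks that the observed SCT/SC match what was injected.
  $RPC bdev_nvme_reset_controller nvme0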
00:09:23.944 00:09:23.944 real 0m3.572s 00:09:23.944 user 0m12.778s 00:09:23.944 sys 0m0.432s 00:09:23.944 03:18:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.944 ************************************ 00:09:23.944 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:23.944 ************************************ 00:09:23.944 03:18:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:23.944 03:18:11 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:23.944 03:18:11 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:23.944 03:18:11 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:23.944 03:18:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.944 03:18:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:23.944 ************************************ 00:09:23.944 START TEST nvme_fio 00:09:23.944 ************************************ 00:09:23.944 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:23.944 03:18:11 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:23.944 03:18:11 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:23.944 03:18:11 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:23.944 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:23.944 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:23.944 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:23.944 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:23.944 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:23.944 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:23.944 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:23.944 03:18:11 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:23.944 03:18:11 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:23.944 03:18:11 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:23.944 03:18:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:23.944 03:18:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:24.203 03:18:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:24.203 03:18:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:24.462 03:18:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:24.462 03:18:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:24.462 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:24.462 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:24.462 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1343 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:24.462 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:24.462 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:24.462 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:24.462 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:24.462 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:24.462 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:24.462 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:24.462 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:24.462 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:24.462 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:24.462 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:24.462 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:24.463 03:18:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:24.463 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:24.463 fio-3.35 00:09:24.463 Starting 1 thread 00:09:32.578 00:09:32.578 test: (groupid=0, jobs=1): err= 0: pid=77501: Thu Nov 21 03:18:19 2024 00:09:32.578 read: IOPS=24.3k, BW=94.8MiB/s (99.4MB/s)(190MiB/2001msec) 00:09:32.578 slat (usec): min=3, max=203, avg= 4.94, stdev= 2.26 00:09:32.578 clat (usec): min=200, max=10465, avg=2633.75, stdev=791.36 00:09:32.578 lat (usec): min=205, max=10540, avg=2638.68, stdev=792.72 00:09:32.578 clat percentiles (usec): 00:09:32.578 | 1.00th=[ 1598], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2343], 00:09:32.578 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2442], 00:09:32.578 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 3032], 95.00th=[ 4555], 00:09:32.578 | 99.00th=[ 6128], 99.50th=[ 6521], 99.90th=[ 8094], 99.95th=[ 8848], 00:09:32.578 | 99.99th=[10159] 00:09:32.578 bw ( KiB/s): min=95976, max=99424, per=100.00%, avg=97834.67, stdev=1739.71, samples=3 00:09:32.578 iops : min=23994, max=24856, avg=24458.67, stdev=434.93, samples=3 00:09:32.578 write: IOPS=24.1k, BW=94.2MiB/s (98.8MB/s)(189MiB/2001msec); 0 zone resets 00:09:32.578 slat (usec): min=3, max=223, avg= 5.22, stdev= 2.55 00:09:32.578 clat (usec): min=242, max=10370, avg=2636.39, stdev=796.48 00:09:32.578 lat (usec): min=247, max=10403, avg=2641.62, stdev=797.82 00:09:32.578 clat percentiles (usec): 00:09:32.578 | 1.00th=[ 1582], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2343], 00:09:32.578 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2442], 00:09:32.578 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 3032], 95.00th=[ 4621], 00:09:32.578 | 99.00th=[ 6194], 99.50th=[ 6587], 99.90th=[ 8225], 99.95th=[ 8979], 00:09:32.578 | 99.99th=[10028] 00:09:32.578 bw ( KiB/s): min=95528, max=100120, per=100.00%, avg=97813.33, stdev=2296.07, samples=3 00:09:32.578 iops : min=23882, max=25030, avg=24453.33, stdev=574.02, samples=3 00:09:32.578 lat 
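The xtrace above captures the two setup steps nvme_fio performs before each run: it enumerates controller PCI addresses by piping the JSON emitted by gen_nvme.sh through jq, and it walks the fio plugin's shared-library dependencies with ldd to find a sanitizer runtime that must be LD_PRELOADed ahead of the plugin (the ASan runtime generally has to be loaded before any instrumented library). A condensed bash sketch of that flow, reconstructed from the traced statements; the real helper also probes libclang_rt.asan, and fio_job here simply stands in for example_config.fio:

  #!/usr/bin/env bash
  rootdir=/home/vagrant/spdk_repo/spdk
  plugin=$rootdir/build/fio/spdk_nvme
  fio_job=$rootdir/app/fio/nvme/example_config.fio

  # Enumerate NVMe PCI addresses (BDFs) from the generated JSON config.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

  # If the plugin links a sanitizer, its runtime must be preloaded first.
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

  for bdf in "${bdfs[@]}"; do
      # bs=4096 because the spdk_nvme_identify output showed no 'Extended Data LBA' format.
      LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$fio_job" \
          "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs=4096
  done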
(usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.07% 00:09:32.578 lat (msec) : 2=3.51%, 4=89.97%, 10=6.39%, 20=0.01% 00:09:32.578 cpu : usr=98.75%, sys=0.25%, ctx=31, majf=0, minf=624 00:09:32.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:32.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.578 issued rwts: total=48577,48277,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.578 00:09:32.578 Run status group 0 (all jobs): 00:09:32.578 READ: bw=94.8MiB/s (99.4MB/s), 94.8MiB/s-94.8MiB/s (99.4MB/s-99.4MB/s), io=190MiB (199MB), run=2001-2001msec 00:09:32.578 WRITE: bw=94.2MiB/s (98.8MB/s), 94.2MiB/s-94.2MiB/s (98.8MB/s-98.8MB/s), io=189MiB (198MB), run=2001-2001msec 00:09:32.578 ----------------------------------------------------- 00:09:32.578 Suppressions used: 00:09:32.578 count bytes template 00:09:32.578 1 32 /usr/src/fio/parse.c 00:09:32.578 1 8 libtcmalloc_minimal.so 00:09:32.578 ----------------------------------------------------- 00:09:32.578 00:09:32.578 03:18:19 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:32.578 03:18:19 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:32.578 03:18:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:32.578 03:18:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:32.578 03:18:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:32.578 03:18:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:32.578 03:18:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:32.578 03:18:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:32.578 03:18:19 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:32.578 03:18:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:32.578 03:18:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:32.578 03:18:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:32.578 03:18:19 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:32.578 03:18:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:32.578 03:18:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:32.578 03:18:19 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:32.578 03:18:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:32.578 03:18:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:32.578 03:18:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:32.578 03:18:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:32.578 03:18:19 nvme.nvme_fio -- common/autotest_common.sh@1350 
-- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:32.578 03:18:19 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:32.578 03:18:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:32.578 03:18:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:32.578 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:32.578 fio-3.35 00:09:32.578 Starting 1 thread 00:09:40.746 00:09:40.746 test: (groupid=0, jobs=1): err= 0: pid=77556: Thu Nov 21 03:18:26 2024 00:09:40.746 read: IOPS=22.5k, BW=87.7MiB/s (92.0MB/s)(176MiB/2001msec) 00:09:40.746 slat (nsec): min=3378, max=62540, avg=5196.09, stdev=2502.41 00:09:40.746 clat (usec): min=237, max=10983, avg=2848.64, stdev=1005.37 00:09:40.746 lat (usec): min=242, max=11004, avg=2853.84, stdev=1006.72 00:09:40.746 clat percentiles (usec): 00:09:40.746 | 1.00th=[ 1598], 5.00th=[ 2040], 10.00th=[ 2212], 20.00th=[ 2343], 00:09:40.746 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:09:40.746 | 70.00th=[ 2704], 80.00th=[ 3130], 90.00th=[ 4293], 95.00th=[ 5342], 00:09:40.746 | 99.00th=[ 6456], 99.50th=[ 6980], 99.90th=[ 7898], 99.95th=[ 8717], 00:09:40.746 | 99.99th=[10814] 00:09:40.746 bw ( KiB/s): min=90536, max=92024, per=100.00%, avg=91352.00, stdev=754.38, samples=3 00:09:40.746 iops : min=22634, max=23006, avg=22838.00, stdev=188.59, samples=3 00:09:40.746 write: IOPS=22.3k, BW=87.2MiB/s (91.4MB/s)(175MiB/2001msec); 0 zone resets 00:09:40.746 slat (nsec): min=3455, max=80374, avg=5376.61, stdev=2408.02 00:09:40.746 clat (usec): min=260, max=10839, avg=2850.51, stdev=1012.79 00:09:40.746 lat (usec): min=265, max=10847, avg=2855.88, stdev=1014.06 00:09:40.746 clat percentiles (usec): 00:09:40.746 | 1.00th=[ 1598], 5.00th=[ 2057], 10.00th=[ 2212], 20.00th=[ 2343], 00:09:40.746 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:09:40.746 | 70.00th=[ 2704], 80.00th=[ 3097], 90.00th=[ 4293], 95.00th=[ 5342], 00:09:40.746 | 99.00th=[ 6521], 99.50th=[ 7046], 99.90th=[ 8029], 99.95th=[ 8979], 00:09:40.746 | 99.99th=[10421] 00:09:40.746 bw ( KiB/s): min=90048, max=93584, per=100.00%, avg=91501.33, stdev=1850.10, samples=3 00:09:40.746 iops : min=22512, max=23396, avg=22875.33, stdev=462.52, samples=3 00:09:40.746 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.06% 00:09:40.746 lat (msec) : 2=3.79%, 4=84.45%, 10=11.65%, 20=0.02% 00:09:40.746 cpu : usr=99.20%, sys=0.00%, ctx=5, majf=0, minf=625 00:09:40.746 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:40.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:40.746 issued rwts: total=44949,44673,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.746 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:40.746 00:09:40.746 Run status group 0 (all jobs): 00:09:40.746 READ: bw=87.7MiB/s (92.0MB/s), 87.7MiB/s-87.7MiB/s (92.0MB/s-92.0MB/s), io=176MiB (184MB), run=2001-2001msec 00:09:40.746 WRITE: bw=87.2MiB/s (91.4MB/s), 87.2MiB/s-87.2MiB/s (91.4MB/s-91.4MB/s), io=175MiB (183MB), run=2001-2001msec 00:09:40.746 ----------------------------------------------------- 00:09:40.746 Suppressions used: 00:09:40.746 count bytes 
template 00:09:40.746 1 32 /usr/src/fio/parse.c 00:09:40.746 1 8 libtcmalloc_minimal.so 00:09:40.746 ----------------------------------------------------- 00:09:40.746 00:09:40.746 03:18:27 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:40.746 03:18:27 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:40.746 03:18:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:40.746 03:18:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:40.746 03:18:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:40.746 03:18:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:40.746 03:18:27 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:40.746 03:18:27 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:40.746 03:18:27 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:40.746 03:18:27 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:40.746 03:18:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:40.746 03:18:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:40.746 03:18:27 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:40.746 03:18:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:40.746 03:18:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:40.746 03:18:27 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:40.746 03:18:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:40.746 03:18:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:40.746 03:18:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:40.746 03:18:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:40.746 03:18:27 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:40.746 03:18:27 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:40.746 03:18:27 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:40.746 03:18:27 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:40.746 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:40.746 fio-3.35 00:09:40.746 Starting 1 thread 00:09:48.914 00:09:48.914 test: (groupid=0, jobs=1): err= 0: pid=77618: Thu Nov 21 03:18:35 2024 00:09:48.914 read: IOPS=22.4k, BW=87.5MiB/s (91.7MB/s)(175MiB/2001msec) 00:09:48.914 slat (usec): min=3, max=106, avg= 5.15, stdev= 2.34 00:09:48.914 clat (usec): min=175, max=10118, avg=2857.67, stdev=905.49 00:09:48.914 lat (usec): min=178, max=10168, 
avg=2862.82, stdev=906.75 00:09:48.914 clat percentiles (usec): 00:09:48.914 | 1.00th=[ 1926], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:48.914 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2573], 00:09:48.914 | 70.00th=[ 2737], 80.00th=[ 3032], 90.00th=[ 3949], 95.00th=[ 5145], 00:09:48.914 | 99.00th=[ 6325], 99.50th=[ 6587], 99.90th=[ 7898], 99.95th=[ 8455], 00:09:48.914 | 99.99th=[ 9896] 00:09:48.914 bw ( KiB/s): min=84720, max=96287, per=100.00%, avg=90487.67, stdev=5783.57, samples=3 00:09:48.914 iops : min=21180, max=24071, avg=22621.67, stdev=1445.52, samples=3 00:09:48.914 write: IOPS=22.2k, BW=86.9MiB/s (91.1MB/s)(174MiB/2001msec); 0 zone resets 00:09:48.914 slat (nsec): min=3502, max=58764, avg=5374.36, stdev=2329.18 00:09:48.914 clat (usec): min=203, max=9992, avg=2858.19, stdev=903.30 00:09:48.914 lat (usec): min=207, max=10011, avg=2863.56, stdev=904.59 00:09:48.914 clat percentiles (usec): 00:09:48.914 | 1.00th=[ 1893], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:48.914 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2606], 00:09:48.914 | 70.00th=[ 2737], 80.00th=[ 3032], 90.00th=[ 3982], 95.00th=[ 5080], 00:09:48.914 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 7963], 99.95th=[ 8586], 00:09:48.914 | 99.99th=[ 9765] 00:09:48.914 bw ( KiB/s): min=83928, max=96063, per=100.00%, avg=90703.67, stdev=6190.24, samples=3 00:09:48.914 iops : min=20982, max=24015, avg=22675.67, stdev=1547.24, samples=3 00:09:48.914 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:09:48.914 lat (msec) : 2=1.30%, 4=88.84%, 10=9.80%, 20=0.01% 00:09:48.914 cpu : usr=99.15%, sys=0.05%, ctx=4, majf=0, minf=625 00:09:48.914 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:48.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.914 issued rwts: total=44804,44501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.914 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.914 00:09:48.914 Run status group 0 (all jobs): 00:09:48.914 READ: bw=87.5MiB/s (91.7MB/s), 87.5MiB/s-87.5MiB/s (91.7MB/s-91.7MB/s), io=175MiB (184MB), run=2001-2001msec 00:09:48.914 WRITE: bw=86.9MiB/s (91.1MB/s), 86.9MiB/s-86.9MiB/s (91.1MB/s-91.1MB/s), io=174MiB (182MB), run=2001-2001msec 00:09:48.914 ----------------------------------------------------- 00:09:48.914 Suppressions used: 00:09:48.914 count bytes template 00:09:48.914 1 32 /usr/src/fio/parse.c 00:09:48.914 1 8 libtcmalloc_minimal.so 00:09:48.914 ----------------------------------------------------- 00:09:48.914 00:09:48.914 03:18:35 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:48.914 03:18:35 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:48.914 03:18:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:48.914 03:18:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:48.914 03:18:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:48.914 03:18:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:48.914 03:18:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:48.914 03:18:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:48.914 03:18:36 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:48.914 03:18:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:48.914 03:18:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:48.914 03:18:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:48.914 03:18:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:48.914 03:18:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:48.914 03:18:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:48.914 03:18:36 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:48.914 03:18:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:48.914 03:18:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:48.914 03:18:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:48.914 03:18:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:48.914 03:18:36 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:48.914 03:18:36 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:48.914 03:18:36 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:48.914 03:18:36 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:49.175 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:49.175 fio-3.35 00:09:49.175 Starting 1 thread 00:09:57.317 00:09:57.317 test: (groupid=0, jobs=1): err= 0: pid=77680: Thu Nov 21 03:18:44 2024 00:09:57.317 read: IOPS=23.8k, BW=93.0MiB/s (97.5MB/s)(186MiB/2001msec) 00:09:57.317 slat (nsec): min=4214, max=45708, avg=4907.05, stdev=1906.15 00:09:57.317 clat (usec): min=661, max=8771, avg=2686.91, stdev=792.68 00:09:57.317 lat (usec): min=698, max=8781, avg=2691.82, stdev=793.82 00:09:57.317 clat percentiles (usec): 00:09:57.317 | 1.00th=[ 1729], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2343], 00:09:57.317 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2507], 00:09:57.317 | 70.00th=[ 2573], 80.00th=[ 2704], 90.00th=[ 3228], 95.00th=[ 4621], 00:09:57.317 | 99.00th=[ 6259], 99.50th=[ 6718], 99.90th=[ 7504], 99.95th=[ 7701], 00:09:57.317 | 99.99th=[ 8225] 00:09:57.317 bw ( KiB/s): min=87856, max=98496, per=98.52%, avg=93848.00, stdev=5445.84, samples=3 00:09:57.317 iops : min=21964, max=24624, avg=23462.00, stdev=1361.46, samples=3 00:09:57.317 write: IOPS=23.7k, BW=92.4MiB/s (96.9MB/s)(185MiB/2001msec); 0 zone resets 00:09:57.317 slat (nsec): min=4282, max=54830, avg=5154.53, stdev=1921.32 00:09:57.317 clat (usec): min=790, max=8660, avg=2690.02, stdev=789.17 00:09:57.317 lat (usec): min=803, max=8667, avg=2695.17, stdev=790.25 00:09:57.317 clat percentiles (usec): 00:09:57.317 | 1.00th=[ 1713], 5.00th=[ 2212], 10.00th=[ 2311], 
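Note the filename syntax in these runs: the SPDK fio plugin packs the transport and PCI address into fio's --filename, and the colons of the BDF (0000:00:13.0) are written as dots (0000.00.13.0) because fio reserves ':' as a filename separator. A minimal standalone invocation sketch; the job parameters are inferred from the fio output above (randrw, 4 KiB blocks, iodepth 128, one job, ~2 s time-based run), not copied from example_config.fio:

  # Hypothetical job file mirroring what the fio output above reports.
  cat > /tmp/spdk_nvme_test.fio <<'EOF'
  [test]
  ioengine=spdk   ; provided by the LD_PRELOADed fio plugin
  thread=1        ; the SPDK plugin requires threaded jobs
  rw=randrw
  bs=4096
  iodepth=128
  time_based=1
  runtime=2
  EOF

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /tmp/spdk_nvme_test.fio \
      '--filename=trtype=PCIe traddr=0000.00.13.0'   # ':' rewritten as '.'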
20.00th=[ 2343], 00:09:57.317 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2507], 00:09:57.317 | 70.00th=[ 2573], 80.00th=[ 2704], 90.00th=[ 3228], 95.00th=[ 4621], 00:09:57.317 | 99.00th=[ 6259], 99.50th=[ 6652], 99.90th=[ 7439], 99.95th=[ 7832], 00:09:57.317 | 99.99th=[ 8455] 00:09:57.317 bw ( KiB/s): min=87424, max=97552, per=99.23%, avg=93906.67, stdev=5628.67, samples=3 00:09:57.317 iops : min=21856, max=24388, avg=23476.67, stdev=1407.17, samples=3 00:09:57.317 lat (usec) : 750=0.01%, 1000=0.01% 00:09:57.317 lat (msec) : 2=2.14%, 4=91.08%, 10=6.77% 00:09:57.317 cpu : usr=99.35%, sys=0.00%, ctx=4, majf=0, minf=625 00:09:57.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:57.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.317 issued rwts: total=47652,47340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.317 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.317 00:09:57.317 Run status group 0 (all jobs): 00:09:57.317 READ: bw=93.0MiB/s (97.5MB/s), 93.0MiB/s-93.0MiB/s (97.5MB/s-97.5MB/s), io=186MiB (195MB), run=2001-2001msec 00:09:57.317 WRITE: bw=92.4MiB/s (96.9MB/s), 92.4MiB/s-92.4MiB/s (96.9MB/s-96.9MB/s), io=185MiB (194MB), run=2001-2001msec 00:09:57.317 ----------------------------------------------------- 00:09:57.317 Suppressions used: 00:09:57.317 count bytes template 00:09:57.317 1 32 /usr/src/fio/parse.c 00:09:57.317 1 8 libtcmalloc_minimal.so 00:09:57.317 ----------------------------------------------------- 00:09:57.317 00:09:57.317 ************************************ 00:09:57.317 END TEST nvme_fio 00:09:57.317 ************************************ 00:09:57.317 03:18:44 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:57.317 03:18:44 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:57.317 00:09:57.317 real 0m33.433s 00:09:57.317 user 0m20.665s 00:09:57.317 sys 0m23.292s 00:09:57.317 03:18:44 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.317 03:18:44 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:57.317 ************************************ 00:09:57.317 END TEST nvme 00:09:57.317 ************************************ 00:09:57.317 00:09:57.317 real 1m42.889s 00:09:57.317 user 3m38.920s 00:09:57.317 sys 0m34.252s 00:09:57.317 03:18:44 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.317 03:18:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:57.317 03:18:44 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:57.317 03:18:44 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:57.317 03:18:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:57.317 03:18:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.317 03:18:44 -- common/autotest_common.sh@10 -- # set +x 00:09:57.317 ************************************ 00:09:57.317 START TEST nvme_scc 00:09:57.317 ************************************ 00:09:57.317 03:18:44 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:57.317 * Looking for test storage... 
00:09:57.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:57.317 03:18:44 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:57.317 03:18:44 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:57.317 03:18:44 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:57.579 03:18:44 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.579 03:18:44 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:57.579 03:18:44 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.579 03:18:44 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:57.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.579 --rc genhtml_branch_coverage=1 00:09:57.579 --rc genhtml_function_coverage=1 00:09:57.579 --rc genhtml_legend=1 00:09:57.579 --rc geninfo_all_blocks=1 00:09:57.579 --rc geninfo_unexecuted_blocks=1 00:09:57.579 00:09:57.579 ' 00:09:57.579 03:18:44 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:57.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.579 --rc genhtml_branch_coverage=1 00:09:57.579 --rc genhtml_function_coverage=1 00:09:57.579 --rc genhtml_legend=1 00:09:57.579 --rc geninfo_all_blocks=1 00:09:57.579 --rc geninfo_unexecuted_blocks=1 00:09:57.579 00:09:57.579 ' 00:09:57.579 03:18:44 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:57.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.579 --rc genhtml_branch_coverage=1 00:09:57.579 --rc genhtml_function_coverage=1 00:09:57.579 --rc genhtml_legend=1 00:09:57.579 --rc geninfo_all_blocks=1 00:09:57.579 --rc geninfo_unexecuted_blocks=1 00:09:57.579 00:09:57.580 ' 00:09:57.580 03:18:44 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:57.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.580 --rc genhtml_branch_coverage=1 00:09:57.580 --rc genhtml_function_coverage=1 00:09:57.580 --rc genhtml_legend=1 00:09:57.580 --rc geninfo_all_blocks=1 00:09:57.580 --rc geninfo_unexecuted_blocks=1 00:09:57.580 00:09:57.580 ' 00:09:57.580 03:18:44 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:57.580 03:18:44 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:57.580 03:18:44 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:57.580 03:18:44 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:57.580 03:18:44 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.580 03:18:44 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.580 03:18:44 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.580 03:18:44 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.580 03:18:44 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.580 03:18:44 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.580 03:18:44 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.580 03:18:44 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.580 03:18:44 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:57.580 03:18:44 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
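Before the storage scan, the trace above also shows scripts/common.sh deciding lcov compatibility: lt 1.15 2 splits each version string on '.', '-' and ':' into arrays and compares the components numerically, so the older-lcov LCOV_OPTS block is exported when the installed version is below 2. A trimmed reconstruction of that comparison, following the traced statements (the full cmp_versions also handles the '>', '>=', '<=' and '==' operators and normalizes components through decimal()):

  # Return success when version $1 sorts strictly before version $2.
  lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v max=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
      for ((v = 0; v < max; v++)); do
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
      done
      return 1  # versions are equal
  }

  lt 1.15 2 && echo "lcov < 2: use the legacy --rc option names"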
00:09:57.580 03:18:44 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:57.580 03:18:44 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:57.580 03:18:44 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:57.580 03:18:44 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:57.580 03:18:44 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:57.580 03:18:44 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:57.580 03:18:44 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:57.580 03:18:44 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:57.580 03:18:44 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:57.580 03:18:44 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.580 03:18:44 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:57.580 03:18:44 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:57.580 03:18:44 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:57.580 03:18:44 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:57.841 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:57.841 Waiting for block devices as requested 00:09:58.102 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:58.102 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:58.102 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:58.102 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:03.444 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:03.444 03:18:50 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:03.444 03:18:50 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:03.444 03:18:50 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:03.444 03:18:50 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:03.444 03:18:50 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:03.444 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
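Several of the fields being captured here are bit masks rather than scalars; oacs=0x12a a few assignments back, for instance, advertises which optional admin commands this QEMU controller accepts, and later test helpers gate on single bits of such fields with shell arithmetic. A small illustration; the helper name ctrl_field_bit is hypothetical (not a functions.sh API), and the bit positions follow the NVMe base specification (OACS bit 1 = Format NVM, bit 3 = Namespace Management):

  # Hypothetical helper: test whether bit <n> is set in a parsed field value.
  ctrl_field_bit() {  # usage: ctrl_field_bit <value> <bit>
      ((${1} & (1 << ${2})))
  }

  ctrl_field_bit "${nvme0[oacs]}" 1 && echo "Format NVM supported"      # 0x12a: bit 1 set
  ctrl_field_bit "${nvme0[oacs]}" 3 && echo "NS management supported"   # 0x12a: bit 3 set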
00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.445 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:03.446 03:18:50 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.446 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.446 03:18:50 nvme_scc -- 
nvme/functions.sh@21-23 -- # nvme_get fills in the remaining nvme0 id-ctrl fields, one IFS=: read / eval pair per register:
00:10:03.446 03:18:50 nvme_scc --   pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0
00:10:03.446 03:18:50 nvme_scc --   fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0
00:10:03.447 03:18:50 nvme_scc --   maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
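The pattern traced here is the whole parser: nvme_get splits each "field : value" line of nvme-cli output on the first colon and evals it into a global associative array. A minimal re-creation under that assumption (a sketch of the pattern visible in the trace, not the actual nvme/functions.sh helper):

    # Sketch: mirror the IFS=: / read -r reg val / eval pattern from the xtrace.
    # Assumes `nvme id-ctrl <dev>` prints one "field : value" line per register.
    nvme_get_sketch() {
        local dev=$1 ref=$2 reg val
        declare -gA "$ref=()"                # global assoc array, e.g. nvme0=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # strip the padding around the field name
            val=${val# }                     # drop the space after the colon
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"       # e.g. nvme0[sqes]=0x66
        done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
    }
    # usage: nvme_get_sketch /dev/nvme0 nvme0 && echo "${nvme0[oncs]}"   # 0x15d

Keeping val as the last variable of read is what lets multi-colon values such as the ps0 power-state line below survive intact.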
00:10:03.447 03:18:50 nvme_scc --   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:10:03.447 03:18:50 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:10:03.447 03:18:50 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:03.447 03:18:50 nvme_scc -- nvme/functions.sh@55-57 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] -> ns_dev=ng0n1, nvme_get ng0n1 id-ns /dev/ng0n1
00:10:03.448 03:18:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:10:03.448 03:18:50 nvme_scc --   nsze=0x140000 ncap=0x140000 nuse=0x140000
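The glob at functions.sh@54 is worth unpacking: a single extglob alternation, built from two parameter expansions on $ctrl, matches both the generic character node (ng0n1) and the block node (nvme0n1) of every namespace. Unrolled as a standalone sketch (shopt flags added here for illustration; the real loop guards non-matches with the [[ -e ]] test at @55 rather than nullglob):

    # Sketch: the namespace glob from functions.sh@54, unrolled
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme0
    inst=${ctrl##*nvme}                  # "0"     -> generic nodes ng0n1, ng0n2, ...
    name=${ctrl##*/}                     # "nvme0" -> block nodes   nvme0n1, nvme0n2, ...
    for ns in "$ctrl/"@("ng$inst"|"${name}n")*; do
        echo "namespace node: ${ns##*/}"
    done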
00:10:03.448 03:18:50 nvme_scc --   nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:10:03.448 03:18:50 nvme_scc --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:10:03.449 03:18:50 nvme_scc --   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:10:03.449 03:18:50 nvme_scc --   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:03.449 03:18:50 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:10:03.450 03:18:50 nvme_scc --   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:10:03.450 03:18:50 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:10:03.450 03:18:50 nvme_scc -- nvme/functions.sh@54-57 -- # next match: [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] -> ns_dev=nvme0n1, nvme_get nvme0n1 id-ns /dev/nvme0n1
00:10:03.450 03:18:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:10:03.450 03:18:50 nvme_scc --   nsze=0x140000 ncap=0x140000
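Two of the ng0n1 values above pin down the namespace geometry: flbas=0x4 selects LBA format 4, and lbaf4 reads ms:0 lbads:12, i.e. 4096-byte blocks with no metadata. A quick check of the arithmetic (field values copied from the dump):

    # Sketch: decode the in-use LBA format for ng0n1
    flbas=0x4 nsze=0x140000
    fmt=$((flbas & 0xf))                 # low nibble picks the format -> 4
    lbads=12                             # from lbaf4 "ms:0 lbads:12"
    bs=$((1 << lbads))                   # 2^12 = 4096-byte logical blocks
    echo "format #$fmt: ${bs}B blocks, $((nsze * bs / 1024**3)) GiB"   # 5 GiB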
00:10:03.450 03:18:50 nvme_scc --   nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:10:03.451 03:18:50 nvme_scc --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:10:03.451 03:18:50 nvme_scc --   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:10:03.452 03:18:50 nvme_scc --   nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0-lbaf7 identical to ng0n1, lbaf4 in use
00:10:03.452 03:18:50 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:10:03.452 03:18:50 nvme_scc -- nvme/functions.sh@60-63 -- # ctrls[nvme0]=nvme0, nvmes[nvme0]=nvme0_ns, bdfs[nvme0]=0000:00:11.0, ordered_ctrls[0]=nvme0
00:10:03.452 03:18:50 nvme_scc -- nvme/functions.sh@47-51 -- # next controller: [[ -e /sys/class/nvme/nvme1 ]], pci=0000:00:10.0, pci_can_use 0000:00:10.0 (scripts/common.sh@18-27: allow/block lists empty, return 0), ctrl_dev=nvme1
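The bookkeeping at functions.sh@53-63 is the payoff of the whole scan: every controller gets an id-ctrl array named after its device, a nsid-to-node map, and entries in four global tables. A condensed sketch (array names taken from the trace; the nameref wiring is simplified here):

    # Sketch: per-controller bookkeeping mirrored from functions.sh@53-63
    declare -gA nvme0_ns=()                    # nsid -> namespace node
    declare -gA ctrls=() nvmes=() bdfs=()
    declare -ga ordered_ctrls=()

    ns=/sys/class/nvme/nvme0/nvme0n1
    nvme0_ns[${ns##*n}]=${ns##*/}              # "1" -> nvme0n1 (block node overwrites ng0n1)

    ctrl_dev=nvme0
    ctrls[$ctrl_dev]=nvme0                     # name of the id-ctrl assoc array
    nvmes[$ctrl_dev]=nvme0_ns                  # name of the nsid map above
    bdfs[$ctrl_dev]=0000:00:11.0               # the controller's PCI address
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme0     # index 0 preserves enumeration order

    declare -n ns_map=${nvmes[$ctrl_dev]}      # follow the indirection
    echo "${ns_map[1]}"                        # -> nvme0n1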
00:10:03.452 03:18:50 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:10:03.452 03:18:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:10:03.452 03:18:50 nvme_scc --   vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400
00:10:03.453 03:18:50 nvme_scc --   cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
00:10:03.453 03:18:50 nvme_scc --   fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:10:03.453 03:18:50 nvme_scc --   oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:10:03.454 03:18:50 nvme_scc --   mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0
nvme/functions.sh@21 -- # read -r reg val 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.454 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
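The block of @21-@23 records above is one iteration of a field-parsing loop: functions.sh reads nvme-cli output one line at a time, splits each line on ":", and stores the pair in a controller-keyed associative array (here nvme1). A minimal sketch of that mechanism, reconstructed from the xtrace rather than copied from SPDK's source; the nvme_cli variable and the whitespace trimming are assumptions:

    # Parse "reg : val" lines from nvme-cli into a global associative array.
    nvme_cli=${nvme_cli:-nvme}   # assumption: nvme-cli binary on PATH
    nvme_get() {
        local ref=$1 reg val
        shift                                  # remaining args: id-ctrl /dev/nvme1
        local -gA "$ref=()"                    # e.g. declares global nvme1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # header/blank lines carry no value
            reg=${reg// /}                     # assumption: strip padding spaces
            eval "${ref}[${reg}]=\"${val# }\"" # e.g. nvme1[oncs]="0x15d"
        done < <("$nvme_cli" "$@")
    }

Called as nvme_get nvme1 id-ctrl /dev/nvme1, this produces the kind of nvme1[...] assignments the trace records.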
00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.455 03:18:50 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.455 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:03.456 03:18:50 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:03.456 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
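The ng1n1 values captured so far describe the namespace geometry: nsze/ncap/nuse of 0x17a17a blocks and flbas=0x7 selecting LBA format 7, which further down reports lbads:12 with ms:64, i.e. 4096-byte blocks carrying 64 bytes of metadata, roughly 6 GB in total. A hypothetical helper (not part of functions.sh) that decodes the in-use block size from these fields, assuming flbas bits 0-3 select the format and each lbafN string has the "ms:... lbads:... rp:..." shape seen in the trace:

    # Hypothetical: compute the in-use logical block size for a parsed namespace.
    blocksize_of() {
        local -n _ns=$1                        # nameref to e.g. ng1n1
        local fmt=$(( _ns[flbas] & 0xf ))      # 0x7 -> LBA format 7
        local lbads=${_ns[lbaf$fmt]#*lbads:}   # "12 rp:0 (in use)" ...
        lbads=${lbads%% *}                     # ... -> "12"
        echo $(( 1 << lbads ))                 # 2^12 = 4096 bytes
    }
    blocksize_of ng1n1                         # prints 4096 for this namespace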
00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:03.457 03:18:50 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.457 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 
03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.458 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
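Note that the same id-ns field list is now being filled a second time, into nvme1n1: the @54 glob "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* matches both the generic character node (ng1n1) and the block node (nvme1n1) under the controller's sysfs entry, and each match gets its own nvme_get id-ns pass. A standalone illustration of that extglob pattern, assuming the usual /sys/class/nvme layout:

    # Requires extglob (SPDK's scripts enable it); the ctrl path is an example.
    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    # ${ctrl##*nvme} -> "1" and ${ctrl##*/} -> "nvme1", so the pattern
    # expands to @(ng1|nvme1n)* and matches ng1n1 as well as nvme1n1.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "${ns##*/}"
    done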
00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:03.459 
03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.459 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:03.460 03:18:50 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:03.460 03:18:50 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:03.460 03:18:50 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:03.460 03:18:50 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:03.460 03:18:50 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:03.460 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
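[editor's note] One value worth decoding from the batch above: ver=0x10400 packs the controller's NVMe spec version as major<<16 | minor<<8 | tertiary, so this QEMU controller reports NVMe 1.4.0. A quick check against the array the loop is filling:

printf 'NVMe %d.%d.%d\n' \
    $(( nvme2[ver] >> 16 )) \
    $(( (nvme2[ver] >> 8) & 0xff )) \
    $(( nvme2[ver] & 0xff ))
# -> NVMe 1.4.0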
00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.461 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:03.462 03:18:50 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
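[editor's note] The two temperature thresholds just captured (wctemp=343, cctemp=373) are integer kelvins per the NVMe spec, so the usual K-273 conversion gives QEMU's defaults:

echo "warning: $(( nvme2[wctemp] - 273 ))C, critical: $(( nvme2[cctemp] - 273 ))C"
# -> warning: 70C, critical: 100C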
00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:03.462 03:18:50 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.462 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:03.463 
03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.463 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:03.464 
03:18:50 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
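[editor's note] With nsze and flbas captured, ng2n1's byte size is already determined: the low nibble of flbas selects the active LBA format (here lbaf4, whose "ms:0 lbads:12 rp:0 (in use)" entry is parsed a little further down). A small sketch -- the fmt/lbads names are illustrative, and lbads=12 is read off that lbaf4 entry:

fmt=$(( ng2n1[flbas] & 0xf ))    # -> 4, the "(in use)" format below
lbads=12                         # lbaf4: 2^12 = 4096-byte blocks
echo "$(( ng2n1[nsze] << lbads )) bytes"
# 0x100000 blocks * 4096 = 4294967296 bytes (4 GiB)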
00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.464 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.465 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:03.466 03:18:50 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 
03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.466 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.467 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:03.468 03:18:50 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.468 03:18:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:03.735 03:18:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.735 03:18:51 
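For context on the numbers being captured here: in the Identify Namespace data, flbas bits 3:0 select the active LBA format, so flbas=0x4 points at lbaf4, whose descriptor reads "ms:0 lbads:12 rp:0 (in use)": no separate metadata, and 2^12 = 4096-byte blocks. With nsze=0x100000 logical blocks, each of these test namespaces works out to 4 GiB:

nsze=0x100000 lbads=12                   # both values from the dump above
echo "$((nsze << lbads)) bytes"          # 4294967296
echo "$(( (nsze << lbads) >> 30 )) GiB"  # 4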
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:03.735 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- 
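The @54-@58 entries mark the outer loop advancing to the next namespace node. Reconstructed from the for-line visible in the trace, the walk looks roughly like the sketch below; extglob (and, presumably, nullglob) must be enabled for the @(...) alternation to work, and because ng2nX and nvme2nX share the same trailing namespace id, the later block-device match overwrites the earlier char-device entry in _ctrl_ns.

shopt -s extglob nullglob
declare -A _ctrl_ns
ctrl=/sys/class/nvme/nvme2
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # matches ng2n* and nvme2n*
    [[ -e $ns ]] || continue
    ns_dev=${ns##*/}                # ng2n3, nvme2n1, ...
    _ctrl_ns[${ns##*n}]=$ns_dev     # key is the namespace id after the last 'n'
done
declare -p _ctrl_ns                 # ends up pointing at the nvme2nX block nodes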
nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.736 03:18:51 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:03.736 03:18:51 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:03.736 03:18:51 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:03.737 03:18:51 nvme_scc -- 
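One detail worth noting in these assignments: the lbaf descriptors contain spaces, which is why the trace shows the value wrapped in escaped double quotes inside the eval; without them the shell would treat everything after the first space as a separate command. A short illustration of the same move:

declare -A nvme2n1
reg=lbaf4 val='ms:0 lbads:12 rp:0 (in use)'   # descriptor copied from the dump above
eval "nvme2n1[$reg]=\"$val\""                 # the inner quotes keep the spaces
echo "${nvme2n1[lbaf4]}"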
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:03.737 
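[editor note] The block above (and each namespace block before it) is one pass of the nvme_get helper: declare a global associative array, pipe nvme id-ns output through a reg/val reader, and eval one assignment per non-empty field. A minimal sketch of that loop; the whitespace cleanup of the key names is an assumption (functions.sh may do it slightly differently), but the IFS=: / read / [[ -n ]] / eval steps mirror the trace:

    nvme_get() {                             # nvme_get nvme2n3 id-ns /dev/nvme2n3
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # global assoc array, e.g. nvme2n3=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # "nsze   " -> "nsze" (assumed cleanup)
            val=${val# }                     # drop the space after the colon
            [[ -n $val ]] || continue        # the [[ -n ... ]] checks in the trace
            eval "${ref}[${reg}]=\"${val}\"" # nvme2n3[nsze]="0x100000", ...
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Because read -r reg val splits only on the first colon, multi-field values such as "ms:0 lbads:12 rp:0 (in use)" survive intact, which is why the lbafN entries below carry whole strings.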
03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:03.738 03:18:51 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:03.738 03:18:51 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:03.738 03:18:51 nvme_scc -- 
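[editor note] Of the eight lbaf entries recorded above, flbas=0x4 (captured earlier for this namespace) selects lbaf4 as the format in use, and lbads there is the log2 of the data block size. A quick decode with the values from the trace:

    lbads=12                  # from lbaf4: "ms:0 lbads:12 rp:0 (in use)"
    echo $(( 1 << lbads ))    # 4096-byte data blocks, no metadata (ms:0)

This matches the 4096-byte namespace block size the simple_copy test reports further down.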
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:03.738 03:18:51 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:03.738 03:18:51 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:03.738 03:18:51 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:03.738 03:18:51 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 
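[editor note] The functions.sh@47-52 steps above are the per-controller discovery loop, collapsed here into one sketch. The sysfs readlink used to derive the PCI address is an assumption, and pci_can_use is reduced to its observable effect in this run (PCI_ALLOWED/PCI_BLOCKED are empty, so every device passes):

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0
        pci_can_use "$pci" || continue                    # allow/block lists empty here
        ctrl_dev=${ctrl##*/}                              # nvme3
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills the nvme3 array
    done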
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:03.739 03:18:51 nvme_scc -- 
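[editor note] Two of the id-ctrl fields just captured are packed values worth decoding (a sketch; the 4 KiB minimum page size is the usual QEMU value, not something this trace states):

    ver=0x10400                           # VS register: major.minor
    echo "NVMe $((ver >> 16)).$(((ver >> 8) & 0xff))"     # -> NVMe 1.4
    mdts=7                                # max transfer = 2^mdts * min page size
    echo "$(( (1 << mdts) * 4096 )) B"    # 512 KiB, assuming MPSMIN = 4 KiB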
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:03.739 03:18:51 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 
03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:03.739 03:18:51 nvme_scc -- 
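[editor note] wctemp/cctemp above are absolute temperatures in Kelvin, so the QEMU defaults decode to roughly 70 C and 100 C:

    wctemp=343 cctemp=373
    echo "warn: $((wctemp - 273)) C  critical: $((cctemp - 273)) C"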
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 
03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:03.739 
03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.740 03:18:51 
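[editor note] sqes/cqes captured above pack two log2 entry sizes into one byte each: the low nibble is the required size, the high nibble the maximum. Decoded:

    sqes=0x66 cqes=0x44
    echo "SQE $((1 << (sqes & 0xf)))-$((1 << (sqes >> 4))) B"   # 64-64 B
    echo "CQE $((1 << (cqes & 0xf)))-$((1 << (cqes >> 4))) B"   # 16-16 B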
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:03.740 03:18:51 nvme_scc -- 
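[editor note] The @53 and @60-62 assignments above (and the @63 one that follows) are the bookkeeping that ties each parsed controller together. In sketch form, using the nvme3 values from this run:

    declare -A ctrls nvmes bdfs      # global maps filled once per controller
    declare -a ordered_ctrls
    ctrls[nvme3]=nvme3               # name of the id-ctrl assoc array
    nvmes[nvme3]=nvme3_ns            # name of the per-namespace map
    bdfs[nvme3]=0000:00:13.0         # backing PCI address
    ordered_ctrls[3]=nvme3           # ${ctrl_dev/nvme/} gives the numeric index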
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:03.740 03:18:51 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
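[editor note] The feature selection traced next asks each controller whether it implements the Simple Copy command. With oncs=0x15d on all four QEMU controllers, ONCS bit 8 (Copy) is set everywhere, so the first hit in iteration order, nvme1, wins. A condensed sketch of the helpers being traced (get_oncs is folded into its callee here):

    get_nvme_ctrl_feature() {            # functions.sh@69-76, condensed
        local ctrl=$1 reg=$2
        [[ -n $ctrl ]] || return 1
        local -n _ctrl=$ctrl             # nameref into the parsed id-ctrl array
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
    }

    ctrl_has_scc() {
        local ctrl=$1 oncs
        oncs=$(get_nvme_ctrl_feature "$ctrl" oncs)   # 0x15d for every ctrl here
        (( oncs & 1 << 8 ))              # ONCS bit 8 = Copy (Simple Copy) support
    }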
00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:03.740 03:18:51 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:03.740 03:18:51 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:03.740 03:18:51 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:03.740 03:18:51 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:04.312 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:04.879 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:04.879 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:04.879 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:04.879 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:04.879 03:18:52 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:04.879 03:18:52 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:04.879 03:18:52 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.879 03:18:52 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:04.879 ************************************ 00:10:04.879 START TEST nvme_simple_copy 00:10:04.879 ************************************ 00:10:04.880 03:18:52 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:05.138 Initializing NVMe Controllers 00:10:05.138 Attaching to 0000:00:10.0 00:10:05.138 Controller supports SCC. Attached to 0000:00:10.0 00:10:05.138 Namespace ID: 1 size: 6GB 00:10:05.138 Initialization complete. 
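The selection logic traced above is how nvme/functions.sh picks a controller for the SCC test: for every attached controller it reads the cached ONCS (Optional NVM Command Support) identify field and tests bit 8, which advertises the Copy command. All four QEMU controllers report oncs=0x15d, so the bit is set on each, and the first controller in order, nvme1, is the one returned. A minimal standalone sketch of the same check, assuming nvme-cli is installed and using an illustrative device node:

#!/usr/bin/env bash
# Sketch of the ONCS bit-8 (Copy command) test performed by
# nvme/functions.sh; /dev/nvme0 is an illustrative device node.
oncs=$(nvme id-ctrl /dev/nvme0 | awk -F: '/^oncs/ {gsub(/ /, "", $2); print $2}')
if (( oncs & 1 << 8 )); then
    echo "Copy command supported (oncs=$oncs)"
else
    echo "no Simple Copy support (oncs=$oncs)"
fi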
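The simple_copy app whose output follows writes random data to LBAs 0 through 63, issues an NVMe Copy command with destination LBA 256, and verifies that all 64 copied blocks match. A roughly equivalent one-shot copy can be driven from the shell with nvme-cli's copy subcommand; the flag spellings below are assumptions, so verify them against 'nvme copy --help' on your build:

# Hypothetical invocation: copy 64 blocks starting at LBA 0 to LBA 256.
# Note that NLB-style count fields in the Copy command are 0-based, so a
# given nvme-cli version may expect 63 rather than 64 here.
nvme copy /dev/nvme0n1 --sdlba=256 --slbs=0 --blocks=64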
00:10:05.138 00:10:05.138 Controller QEMU NVMe Ctrl (12340 ) 00:10:05.138 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:05.138 Namespace Block Size:4096 00:10:05.138 Writing LBAs 0 to 63 with Random Data 00:10:05.138 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:05.138 LBAs matching Written Data: 64 00:10:05.138 00:10:05.138 real 0m0.249s 00:10:05.138 user 0m0.084s 00:10:05.138 sys 0m0.064s 00:10:05.138 03:18:52 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.138 ************************************ 00:10:05.138 03:18:52 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:05.138 END TEST nvme_simple_copy 00:10:05.138 ************************************ 00:10:05.138 ************************************ 00:10:05.138 END TEST nvme_scc 00:10:05.138 ************************************ 00:10:05.138 00:10:05.138 real 0m7.876s 00:10:05.138 user 0m1.155s 00:10:05.138 sys 0m1.445s 00:10:05.138 03:18:52 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.138 03:18:52 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:05.138 03:18:52 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:05.138 03:18:52 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:05.138 03:18:52 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:05.138 03:18:52 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:05.138 03:18:52 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:05.138 03:18:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:05.138 03:18:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.138 03:18:52 -- common/autotest_common.sh@10 -- # set +x 00:10:05.397 ************************************ 00:10:05.397 START TEST nvme_fdp 00:10:05.397 ************************************ 00:10:05.397 03:18:52 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:10:05.397 * Looking for test storage... 00:10:05.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:05.397 03:18:52 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:05.397 03:18:52 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:10:05.397 03:18:52 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:05.397 03:18:52 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:05.397 03:18:52 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.397 03:18:52 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:05.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.397 --rc genhtml_branch_coverage=1 00:10:05.397 --rc genhtml_function_coverage=1 00:10:05.397 --rc genhtml_legend=1 00:10:05.397 --rc geninfo_all_blocks=1 00:10:05.397 --rc geninfo_unexecuted_blocks=1 00:10:05.397 00:10:05.397 ' 00:10:05.397 03:18:52 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:05.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.397 --rc genhtml_branch_coverage=1 00:10:05.397 --rc genhtml_function_coverage=1 00:10:05.397 --rc genhtml_legend=1 00:10:05.397 --rc geninfo_all_blocks=1 00:10:05.397 --rc geninfo_unexecuted_blocks=1 00:10:05.397 00:10:05.397 ' 00:10:05.397 03:18:52 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:05.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.397 --rc genhtml_branch_coverage=1 00:10:05.397 --rc genhtml_function_coverage=1 00:10:05.397 --rc genhtml_legend=1 00:10:05.397 --rc geninfo_all_blocks=1 00:10:05.397 --rc geninfo_unexecuted_blocks=1 00:10:05.397 00:10:05.397 ' 00:10:05.397 03:18:52 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:05.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.397 --rc genhtml_branch_coverage=1 00:10:05.397 --rc genhtml_function_coverage=1 00:10:05.397 --rc genhtml_legend=1 00:10:05.397 --rc geninfo_all_blocks=1 00:10:05.397 --rc geninfo_unexecuted_blocks=1 00:10:05.397 00:10:05.397 ' 00:10:05.397 03:18:52 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:05.397 03:18:52 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:05.397 03:18:52 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:05.397 03:18:52 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:05.397 03:18:52 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.397 03:18:52 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.397 03:18:52 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.397 03:18:52 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.397 03:18:52 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.397 03:18:52 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:05.397 03:18:52 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.397 03:18:52 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:05.397 03:18:52 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:05.397 03:18:52 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:05.397 03:18:52 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:05.397 03:18:52 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:05.397 03:18:52 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:05.397 03:18:52 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:05.397 03:18:52 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:05.397 03:18:52 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:05.397 03:18:52 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:05.397 03:18:52 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:05.656 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:05.914 Waiting for block devices as requested 00:10:05.914 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:05.914 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:05.914 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:06.173 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.463 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:11.463 03:18:58 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:11.463 03:18:58 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:11.463 03:18:58 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:11.463 03:18:58 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:11.463 03:18:58 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:11.463 03:18:58 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:11.463 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:11.464 03:18:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:11.464 03:18:58 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:11.464 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:11.465 03:18:58 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 
03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:11.465 03:18:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:11.465 03:18:58 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:11.465 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:11.466 03:18:58 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:11.466 03:18:58 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
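Each lbafN value captured above packs three fields: ms (metadata bytes per block), lbads (log2 of the LBA data size), and rp (relative performance); lbaf4 carries the "(in use)" tag. A quick decode of that in-use entry, purely as a reading aid (for controllers with at most 16 formats the low nibble of flbas is the selected format index):

lbaf='ms:0 lbads:12 rp:0 (in use)'              # lbaf4 as captured above
lbads=${lbaf#*lbads:}; lbads=${lbads%% *}       # -> 12
echo "LBA data size: $(( 1 << lbads )) bytes"   # -> 4096, no interleaved metadata (ms:0)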
00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:11.466 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:11.467 03:18:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:11.467 03:18:58 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.467 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:11.468 03:18:58 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:11.468 03:18:58 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:11.468 03:18:58 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:11.468 03:18:58 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:11.468 03:18:58 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
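The oacs=0x12a captured above is the Optional Admin Command Support bitmask from id-ctrl. A bit-by-bit decode, with labels taken from the NVMe base specification rather than from this test's output, so treat them as illustrative:

oacs=0x12a
(( oacs & (1 << 1) )) && echo "Format NVM"              # bit 1
(( oacs & (1 << 3) )) && echo "Namespace Management"    # bit 3
(( oacs & (1 << 5) )) && echo "Directives"              # bit 5
(( oacs & (1 << 8) )) && echo "Doorbell Buffer Config"  # bit 8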
00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.468 03:18:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:11.468 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
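The wctemp/cctemp values captured above (343 and 373) are temperature thresholds reported in kelvins per the NVMe spec, so the QEMU controller's limits decode as below; a trivial conversion, shown only to make the captured numbers concrete:

echo "warning threshold:  $(( 343 - 273 )) C"   # wctemp=343 -> 70 C
echo "critical threshold: $(( 373 - 273 )) C"   # cctemp=373 -> 100 C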
00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.469 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:11.470 03:18:58 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
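
What the trace above is exercising is nvme/functions.sh's nvme_get helper: it runs nvme-cli, splits each output line on the first ':' with IFS=: and read -r reg val, skips pairs with an empty value (the first check, [[ -n '' ]], always falls through), and eval-s each pair into a global associative array named after the device. A minimal re-creation of that pattern, assuming nvme-cli's usual "field : value" output; nvme_get_sketch is a hypothetical name, not the script's own:

    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        declare -gA "$ref=()"              # e.g. ng1n1=(), visible to the caller
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue      # skip lines with nothing after ':'
            reg=${reg//[[:space:]]/}       # trim padding, so 'ps 0 ' becomes ps0
            val=${val# }                   # drop the space that follows ':'
            eval "${ref}[\$reg]=\$val"     # e.g. ng1n1[nsze]=0x17a17a
        done < <("$@")
    }
    # usage: nvme_get_sketch ng1n1 /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
    #        echo "${ng1n1[nsze]}"         # -> 0x17a17a

Because IFS=: only splits on the first colon for the last read variable, multi-colon values such as the ps0 power-state string survive intact, which is why the nvme1[ps0] assignment above carries the whole "mp:25.00W operational ..." payload.
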
00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:11.470 03:18:58 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
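
The per-namespace loop this section keeps re-entering (functions.sh@54, first visible a little above) is an extglob alternation that matches both the generic character device and the block device of each namespace under the controller's sysfs node. A standalone sketch of that enumeration, assuming the same sysfs layout as this run (/sys/class/nvme/nvme1 holding ng1n1 and nvme1n1):

    shopt -s extglob                        # the @(...) alternation below needs extglob
    declare -A _ctrl_ns=()
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue            # an unmatched pattern passes through literally
        ns_dev=${ns##*/}                    # ng1n1 on the first pass, nvme1n1 on the second
        _ctrl_ns[${ns##*n}]=$ns_dev         # keyed by namespace number: _ctrl_ns[1]
    done

Both matches fill the same _ctrl_ns slot, which is why this run records ng1n1 first and then overwrites the slot with nvme1n1 a little further down.
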
00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.470 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:11.471 03:18:58 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:11.471 03:18:58 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:11.471 03:18:58 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:11.471 03:18:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.471 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
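
The identify-namespace fields captured so far are enough to work out this namespace's geometry by hand: flbas 0x7 selects LBA format 7, whose descriptor was recorded above as "ms:64 lbads:12 rp:0 (in use)", i.e. 2^12-byte data blocks with 64 bytes of metadata, and nsze gives the size in blocks. A quick check of that arithmetic (a throwaway snippet, not part of the test):

    fmt=$(( 0x7 & 0xf ))                 # low nibble of flbas picks the format
    block=$(( 1 << 12 ))                 # lbads:12 -> 4096-byte blocks
    blocks=$(( 0x17a17a ))               # nsze, in blocks
    printf 'lbaf%d: %d-byte blocks x %d blocks = %d bytes\n' \
        "$fmt" "$block" "$blocks" $(( blocks * block ))
    # lbaf7: 4096-byte blocks x 1548666 blocks = 6343335936 bytes (~5.9 GiB)
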
00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:11.472 03:18:58 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:11.472 03:18:58 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:11.472 03:18:58 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:11.472 03:18:58 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
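
Two of the nvme2 controller fields just captured decode to something more readable: ver packs the spec version as major.minor.tertiary across its bytes, and mdts caps transfers at 2^mdts minimum-sized pages. The 4 KiB page size below is an assumption about this QEMU controller's CAP.MPSMIN, which id-ctrl output does not carry:

    ver=$(( 0x10400 ))
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
    mdts=7
    printf 'max transfer: %d KiB\n' $(( (1 << mdts) * 4096 / 1024 ))
    # NVMe 1.4.0; max transfer: 512 KiB
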
00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:11.472 03:18:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.472 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
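
The wctemp and cctemp values just stored are in kelvin, as the spec defines those fields; converted, this controller warns at 70 C and reports critical at 100 C:

    for k in 343 373; do printf '%d K = %d C\n' "$k" $(( k - 273 )); done
    # 343 K = 70 C (wctemp), 373 K = 100 C (cctemp)
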
00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:11.473 03:18:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.473 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
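
The trace above is the nvme_get helper from nvme/functions.sh walking `nvme id-ctrl` output one "reg : val" line at a time: with IFS set to ':', `read -r reg val` splits each line at the first colon, the `[[ -n ... ]]` test at functions.sh@22 drops header and blank lines, and functions.sh@23 eval's every surviving pair into a global associative array (nvme2 here). A minimal sketch of that pattern follows; the whitespace trimming is an assumption, since the trace shows only the already-cleaned names and values:

    nvme_get() {                      # sketch of nvme/functions.sh@17-23, not the verbatim source
        local ref=$1 reg val
        shift
        local -gA "$ref=()"           # e.g. nvme2, visible as local -gA 'nvme2=()' in the trace
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}  # assumed cleanup: nvme-cli pads register names with spaces
            val=${val# }              # assumed cleanup: drop the single space after ':'
            [[ -n $val ]] || continue # header lines ("NVME Identify Controller:") have no value
            eval "${ref}[${reg}]=\"${val}\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # e.g. id-ctrl /dev/nvme2
    }

Because read assigns everything after the first colon to val, multi-field values survive the split intact: that is why nvme2[ps0] below keeps its embedded colons ('mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0') and why the rwt line is captured as '0 rwl:0 idle_power:- active_power:-'.
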
00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 
03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:11.474 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:11.475 03:18:58 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:11.475 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 
03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:11.476 
03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
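
The namespaces in this dump all report the same geometry: flbas=0x4 selects LBA format 4, whose descriptor reads 'ms:0 lbads:12 rp:0 (in use)', i.e. no separate metadata and 2^12 = 4096-byte logical blocks, so nsze=0x100000 blocks works out to 4 GiB per namespace. A hypothetical helper (not part of functions.sh) that derives this from the arrays nvme_get builds, assuming the lbafN strings keep the 'ms:… lbads:… rp:…' layout shown in the trace:

    ns_bytes() {
        local -n _ns=$1                          # nameref to a captured array, e.g. ns_bytes ng2n1
        local fmt=$(( ${_ns[flbas]} & 0xf ))     # low nibble picks the active format (4 here)
        local lbads=${_ns[lbaf$fmt]#*lbads:}     # -> '12 rp:0 (in use)'
        lbads=${lbads%% *}                       # -> '12'
        echo $(( ${_ns[nsze]} * (1 << lbads) ))  # 0x100000 * 4096 = 4294967296 (4 GiB)
    }
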
00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.476 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:11.477 03:18:58 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:11.477 03:18:58 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:11.477 03:18:58 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:11.477 
03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:11.477 03:18:58 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:11.477 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:11.478 
03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
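A note on reading the lbaf0..lbaf7 records that close out each namespace: ms is the per-block metadata size in bytes, lbads is the data block size as a power of two, and rp is a relative-performance hint. flbas=0x4 selects format 4, which is why lbaf4 is the entry tagged "(in use)". The two lbads values seen here decode as:

    $ echo "$((1 << 9)) $((1 << 12))"   # lbads:9 / lbads:12 in bytes
    512 4096

so every namespace in this section is formatted with 4096-byte blocks and no separate metadata.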
00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:11.478 03:18:58 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:11.478 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:11.479 03:18:58 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:11.479 03:18:58 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:11.479 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:11.480 03:18:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:11.480 03:18:58 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:11.480 03:18:58 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:11.480 03:18:58 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:11.480 03:18:58 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:11.480 03:18:58 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- 
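What the wall of xtrace above is doing: nvme/functions.sh@16-23 runs /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 (and id-ns for each namespace), splits every output line on the first ':' with `IFS=: read -r reg val`, and evals each pair into a global associative array, which is why the trace fills in nvme3[vid]=0x1b36, nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ', and so on. A minimal standalone sketch of that pattern, assuming bash 4.3+ and nvme-cli on PATH; the nameref is a simplification standing in for the script's eval:

nvme_get_sketch() {                      # mirrors the nvme_get pattern in nvme/functions.sh
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                  # same trick as `local -gA 'nvme3=()'` in the trace
    local -n _regs=$ref                  # nameref instead of eval (hypothetical simplification)
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}         # id-ctrl pads keys: "vid       : 0x1b36" -> vid
        [[ -n $reg ]] && _regs[$reg]=${val# }
    done < <(nvme id-ctrl "$dev")
}
nvme_get_sketch nvme3 /dev/nvme3         # afterwards ${nvme3[vid]} is 0x1b36, ${nvme3[ps0]} the power state line, etc.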
nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.480 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.481 03:18:58 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:11.481 03:18:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.481 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 
03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:11.744 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:11.745 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:11.746 03:18:59 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:11.746 03:18:59 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:11.746 03:18:59 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:11.746 03:18:59 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:11.746 03:18:59 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:12.007 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:12.578 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:12.578 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:12.837 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:12.837 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:12.837 03:19:00 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:12.837 03:19:00 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:12.837 03:19:00 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.837 03:19:00 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:12.837 ************************************ 00:10:12.837 START TEST nvme_flexible_data_placement 00:10:12.837 ************************************ 00:10:12.837 03:19:00 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:13.095 Initializing NVMe Controllers 00:10:13.095 Attaching to 0000:00:13.0 00:10:13.095 Controller supports FDP Attached to 0000:00:13.0 00:10:13.095 Namespace ID: 1 Endurance Group ID: 1 00:10:13.095 Initialization complete. 
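The selection that just ran (functions.sh@176-209, ctrl_has_fdp): for every discovered controller, read the identify field ctratt out of the associative array built above and test bit 19, the flag the script treats as Flexible Data Placement support. nvme0, nvme1 and nvme2 report ctratt=0x8000 and fail the test; nvme3 reports 0x88010, where bit 19 is set, so the fdp app is pointed at 0000:00:13.0. The same check against a live device, sketched with plain nvme-cli and awk:

ctrl_has_fdp_sketch() {                  # same bit test as functions.sh@180
    local dev=$1 ctratt
    ctratt=$(nvme id-ctrl "$dev" | awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
    (( ctratt & 1 << 19 ))               # CTRATT bit 19: controller supports FDP
}
for dev in /dev/nvme{0..3}; do
    ctrl_has_fdp_sketch "$dev" && echo "$dev supports FDP"
done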
00:10:13.095 00:10:13.095 ================================== 00:10:13.095 == FDP tests for Namespace: #01 == 00:10:13.095 ================================== 00:10:13.095 00:10:13.095 Get Feature: FDP: 00:10:13.095 ================= 00:10:13.095 Enabled: Yes 00:10:13.095 FDP configuration Index: 0 00:10:13.095 00:10:13.095 FDP configurations log page 00:10:13.095 =========================== 00:10:13.095 Number of FDP configurations: 1 00:10:13.095 Version: 0 00:10:13.095 Size: 112 00:10:13.095 FDP Configuration Descriptor: 0 00:10:13.095 Descriptor Size: 96 00:10:13.095 Reclaim Group Identifier format: 2 00:10:13.095 FDP Volatile Write Cache: Not Present 00:10:13.095 FDP Configuration: Valid 00:10:13.095 Vendor Specific Size: 0 00:10:13.095 Number of Reclaim Groups: 2 00:10:13.095 Number of Reclaim Unit Handles: 8 00:10:13.095 Max Placement Identifiers: 128 00:10:13.095 Number of Namespaces Supported: 256 00:10:13.095 Reclaim Unit Nominal Size: 6000000 bytes 00:10:13.095 Estimated Reclaim Unit Time Limit: Not Reported 00:10:13.095 RUH Desc #000: RUH Type: Initially Isolated 00:10:13.095 RUH Desc #001: RUH Type: Initially Isolated 00:10:13.095 RUH Desc #002: RUH Type: Initially Isolated 00:10:13.095 RUH Desc #003: RUH Type: Initially Isolated 00:10:13.095 RUH Desc #004: RUH Type: Initially Isolated 00:10:13.095 RUH Desc #005: RUH Type: Initially Isolated 00:10:13.096 RUH Desc #006: RUH Type: Initially Isolated 00:10:13.096 RUH Desc #007: RUH Type: Initially Isolated 00:10:13.096 00:10:13.096 FDP reclaim unit handle usage log page 00:10:13.096 ====================================== 00:10:13.096 Number of Reclaim Unit Handles: 8 00:10:13.096 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:13.096 RUH Usage Desc #001: RUH Attributes: Unused 00:10:13.096 RUH Usage Desc #002: RUH Attributes: Unused 00:10:13.096 RUH Usage Desc #003: RUH Attributes: Unused 00:10:13.096 RUH Usage Desc #004: RUH Attributes: Unused 00:10:13.096 RUH Usage Desc #005: RUH Attributes: Unused 00:10:13.096 RUH Usage Desc #006: RUH Attributes: Unused 00:10:13.096 RUH Usage Desc #007: RUH Attributes: Unused 00:10:13.096 00:10:13.096 FDP statistics log page 00:10:13.096 ======================= 00:10:13.096 Host bytes with metadata written: 1604677632 00:10:13.096 Media bytes with metadata written: 1605517312 00:10:13.096 Media bytes erased: 0 00:10:13.096 00:10:13.096 FDP Reclaim unit handle status 00:10:13.096 ============================== 00:10:13.096 Number of RUHS descriptors: 2 00:10:13.096 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000005a9 00:10:13.096 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:13.096 00:10:13.096 FDP write on placement id: 0 success 00:10:13.096 00:10:13.096 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:13.096 00:10:13.096 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:13.096 00:10:13.096 Get Feature: FDP Events for Placement handle: #0 00:10:13.096 ======================== 00:10:13.096 Number of FDP Events: 6 00:10:13.096 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:13.096 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:13.096 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:13.096 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:13.096 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:13.096 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:13.096 00:10:13.096 FDP events log
page 00:10:13.096 =================== 00:10:13.096 Number of FDP events: 1 00:10:13.096 FDP Event #0: 00:10:13.096 Event Type: RU Not Written to Capacity 00:10:13.096 Placement Identifier: Valid 00:10:13.096 NSID: Valid 00:10:13.096 Location: Valid 00:10:13.096 Placement Identifier: 0 00:10:13.096 Event Timestamp: 3 00:10:13.096 Namespace Identifier: 1 00:10:13.096 Reclaim Group Identifier: 0 00:10:13.096 Reclaim Unit Handle Identifier: 0 00:10:13.096 00:10:13.096 FDP test passed 00:10:13.096 00:10:13.096 real 0m0.235s 00:10:13.096 user 0m0.066s 00:10:13.096 sys 0m0.067s 00:10:13.096 03:19:00 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.096 ************************************ 00:10:13.096 END TEST nvme_flexible_data_placement 00:10:13.096 ************************************ 00:10:13.096 03:19:00 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:13.096 ************************************ 00:10:13.096 END TEST nvme_fdp 00:10:13.096 ************************************ 00:10:13.096 00:10:13.096 real 0m7.822s 00:10:13.096 user 0m1.120s 00:10:13.096 sys 0m1.453s 00:10:13.096 03:19:00 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.096 03:19:00 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:13.096 03:19:00 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:13.096 03:19:00 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:13.096 03:19:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:13.096 03:19:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.096 03:19:00 -- common/autotest_common.sh@10 -- # set +x 00:10:13.096 ************************************ 00:10:13.096 START TEST nvme_rpc 00:10:13.096 ************************************ 00:10:13.096 03:19:00 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:13.355 * Looking for test storage... 
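For reference, the FDP report that just closed maps one-to-one onto NVMe 2.0 log pages: FDP configurations is log ID 0x20, reclaim unit handle usage 0x21, FDP statistics 0x22, and FDP events 0x23, all scoped here to endurance group 1. The same data can be pulled outside the test app; a sketch assuming an nvme-cli recent enough (roughly 2.3+) to ship the fdp subcommands, whose exact flag spellings may differ by version:

nvme fdp configs /dev/nvme3 --endgrp-id=1    # LID 0x20, the descriptor dump above
nvme fdp usage   /dev/nvme3 --endgrp-id=1    # LID 0x21, per-RUH attributes
nvme fdp stats   /dev/nvme3 --endgrp-id=1    # LID 0x22, host/media bytes written
nvme fdp events  /dev/nvme3 --endgrp-id=1    # LID 0x23
nvme get-log /dev/nvme3 --log-id=0x20 --log-len=512 --lsi=1   # raw get-log fallback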
00:10:13.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:13.355 03:19:00 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:13.355 03:19:00 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:13.355 03:19:00 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:13.355 03:19:00 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.355 03:19:00 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:13.355 03:19:00 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.355 03:19:00 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:13.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.355 --rc genhtml_branch_coverage=1 00:10:13.355 --rc genhtml_function_coverage=1 00:10:13.355 --rc genhtml_legend=1 00:10:13.355 --rc geninfo_all_blocks=1 00:10:13.355 --rc geninfo_unexecuted_blocks=1 00:10:13.355 00:10:13.355 ' 00:10:13.355 03:19:00 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:13.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.355 --rc genhtml_branch_coverage=1 00:10:13.355 --rc genhtml_function_coverage=1 00:10:13.355 --rc genhtml_legend=1 00:10:13.355 --rc geninfo_all_blocks=1 00:10:13.355 --rc geninfo_unexecuted_blocks=1 00:10:13.355 00:10:13.355 ' 00:10:13.355 03:19:00 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:13.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.355 --rc genhtml_branch_coverage=1 00:10:13.355 --rc genhtml_function_coverage=1 00:10:13.355 --rc genhtml_legend=1 00:10:13.355 --rc geninfo_all_blocks=1 00:10:13.355 --rc geninfo_unexecuted_blocks=1 00:10:13.355 00:10:13.355 ' 00:10:13.355 03:19:00 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:13.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.355 --rc genhtml_branch_coverage=1 00:10:13.355 --rc genhtml_function_coverage=1 00:10:13.355 --rc genhtml_legend=1 00:10:13.355 --rc geninfo_all_blocks=1 00:10:13.355 --rc geninfo_unexecuted_blocks=1 00:10:13.355 00:10:13.355 ' 00:10:13.355 03:19:00 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:13.355 03:19:00 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:13.355 03:19:00 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:13.355 03:19:00 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:13.356 03:19:00 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:13.356 03:19:00 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:13.356 03:19:00 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:13.356 03:19:00 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:13.356 03:19:00 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:13.356 03:19:00 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:13.356 03:19:00 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:13.356 03:19:00 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:13.356 03:19:00 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:13.356 03:19:00 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:13.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.356 03:19:00 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:13.356 03:19:00 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:13.356 03:19:00 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=79060 00:10:13.356 03:19:00 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:13.356 03:19:00 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 79060 00:10:13.356 03:19:00 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 79060 ']' 00:10:13.356 03:19:00 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.356 03:19:00 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.356 03:19:00 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.356 03:19:00 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.356 03:19:00 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.356 [2024-11-21 03:19:00.886671] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:10:13.356 [2024-11-21 03:19:00.886921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79060 ] 00:10:13.614 [2024-11-21 03:19:01.019980] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:13.614 [2024-11-21 03:19:01.046924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:13.614 [2024-11-21 03:19:01.067457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.614 [2024-11-21 03:19:01.067492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.180 03:19:01 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.180 03:19:01 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:14.180 03:19:01 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:14.438 Nvme0n1 00:10:14.438 03:19:01 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:14.439 03:19:01 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:14.697 request: 00:10:14.697 { 00:10:14.697 "bdev_name": "Nvme0n1", 00:10:14.697 "filename": "non_existing_file", 00:10:14.697 "method": "bdev_nvme_apply_firmware", 00:10:14.697 "req_id": 1 00:10:14.697 } 00:10:14.697 Got JSON-RPC error response 00:10:14.697 response: 00:10:14.697 { 00:10:14.697 "code": -32603, 00:10:14.697 "message": "open file failed." 00:10:14.697 } 00:10:14.697 03:19:02 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:14.697 03:19:02 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:14.697 03:19:02 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:14.955 03:19:02 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:14.955 03:19:02 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 79060 00:10:14.955 03:19:02 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 79060 ']' 00:10:14.955 03:19:02 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 79060 00:10:14.955 03:19:02 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:14.955 03:19:02 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.955 03:19:02 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79060 00:10:14.955 killing process with pid 79060 00:10:14.955 03:19:02 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.955 03:19:02 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.955 03:19:02 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79060' 00:10:14.955 03:19:02 nvme_rpc -- common/autotest_common.sh@973 -- # kill 79060 00:10:14.955 03:19:02 nvme_rpc -- common/autotest_common.sh@978 -- # wait 79060 00:10:15.213 ************************************ 00:10:15.213 END TEST nvme_rpc 00:10:15.213 ************************************ 00:10:15.213 00:10:15.213 real 0m2.081s 00:10:15.213 user 0m4.034s 00:10:15.213 sys 0m0.483s 00:10:15.213 03:19:02 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.213 03:19:02 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.213 03:19:02 
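nvme_rpc.sh in a nutshell, as the request/response pair above shows: start spdk_tgt, attach the first controller over JSON-RPC, feed bdev_nvme_apply_firmware a file that does not exist, and require the -32603 "open file failed." error before detaching and killing the target. Condensed to the three RPC calls, assuming a running spdk_tgt listening on /var/tmp/spdk.sock:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
if "$rpc" bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
    echo 'unexpected success' >&2; exit 1    # the test passes only on this failure
fi
"$rpc" bdev_nvme_detach_controller Nvme0     # teardown, as at nvme_rpc.sh@37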
-- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:15.213 03:19:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:15.213 03:19:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.213 03:19:02 -- common/autotest_common.sh@10 -- # set +x 00:10:15.213 ************************************ 00:10:15.213 START TEST nvme_rpc_timeouts 00:10:15.213 ************************************ 00:10:15.213 03:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:15.471 * Looking for test storage... 00:10:15.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:15.471 03:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:15.471 03:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:10:15.471 03:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:15.471 03:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:15.471 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.471 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.471 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.471 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.471 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.471 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.471 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.471 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.471 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.471 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.471 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.471 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:15.471 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:15.471 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.471 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:15.472 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:15.472 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:15.472 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.472 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:15.472 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.472 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:15.472 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:15.472 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.472 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:15.472 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.472 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.472 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.472 03:19:02 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:15.472 03:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.472 03:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:15.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.472 --rc genhtml_branch_coverage=1 00:10:15.472 --rc genhtml_function_coverage=1 00:10:15.472 --rc genhtml_legend=1 00:10:15.472 --rc geninfo_all_blocks=1 00:10:15.472 --rc geninfo_unexecuted_blocks=1 00:10:15.472 00:10:15.472 ' 00:10:15.472 03:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:15.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.472 --rc genhtml_branch_coverage=1 00:10:15.472 --rc genhtml_function_coverage=1 00:10:15.472 --rc genhtml_legend=1 00:10:15.472 --rc geninfo_all_blocks=1 00:10:15.472 --rc geninfo_unexecuted_blocks=1 00:10:15.472 00:10:15.472 ' 00:10:15.472 03:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:15.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.472 --rc genhtml_branch_coverage=1 00:10:15.472 --rc genhtml_function_coverage=1 00:10:15.472 --rc genhtml_legend=1 00:10:15.472 --rc geninfo_all_blocks=1 00:10:15.472 --rc geninfo_unexecuted_blocks=1 00:10:15.472 00:10:15.472 ' 00:10:15.472 03:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:15.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.472 --rc genhtml_branch_coverage=1 00:10:15.472 --rc genhtml_function_coverage=1 00:10:15.472 --rc genhtml_legend=1 00:10:15.472 --rc geninfo_all_blocks=1 00:10:15.472 --rc geninfo_unexecuted_blocks=1 00:10:15.472 00:10:15.472 ' 00:10:15.472 03:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:15.472 03:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_79114 00:10:15.472 03:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_79114 00:10:15.472 03:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=79146 00:10:15.472 03:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
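For readers skimming the xtrace, the 'lt 1.15 2' walk just traced is scripts/common.sh comparing the installed lcov version against 2 before enabling the extra coverage flags. A condensed, stand-alone sketch of that comparison follows; the function name is illustrative, and the real helper additionally routes each field through its 'decimal' sanitizer (elided here):

cmp_versions_sketch() {
    local ver1 ver2 op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"   # split on the same IFS=.-: seen in the trace
    IFS=.-: read -ra ver2 <<< "$3"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}             # missing fields compare as 0
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]                                # every field equal
}
cmp_versions_sketch 1.15 '<' 2 && echo 'lcov 1.15 is older than 2'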
00:10:15.472 03:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 79146 00:10:15.472 03:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 79146 ']' 00:10:15.472 03:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.472 03:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.472 03:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.472 03:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.472 03:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:15.472 03:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:15.472 [2024-11-21 03:19:02.947379] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:10:15.472 [2024-11-21 03:19:02.947495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79146 ] 00:10:15.730 [2024-11-21 03:19:03.081148] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:15.730 [2024-11-21 03:19:03.108167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:15.730 [2024-11-21 03:19:03.128756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.730 [2024-11-21 03:19:03.128792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.303 03:19:03 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.303 03:19:03 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:16.303 Checking default timeout settings: 00:10:16.303 03:19:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:16.303 03:19:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:16.565 Making settings changes with rpc: 00:10:16.565 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:16.565 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:16.826 Check default vs. modified settings: 00:10:16.826 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:16.826 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:17.087 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:17.087 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:17.087 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_79114 00:10:17.087 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:17.087 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:17.087 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:17.087 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_79114 00:10:17.087 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:17.087 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:17.087 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:17.087 Setting action_on_timeout is changed as expected. 00:10:17.087 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:17.087 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:17.087 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:17.088 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:17.088 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_79114 00:10:17.088 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_79114 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:17.348 Setting timeout_us is changed as expected. 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_79114 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_79114 00:10:17.348 Setting timeout_admin_us is changed as expected. 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_79114 /tmp/settings_modified_79114 00:10:17.348 03:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 79146 00:10:17.348 03:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 79146 ']' 00:10:17.348 03:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 79146 00:10:17.348 03:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:17.348 03:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.348 03:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79146 00:10:17.348 03:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.348 killing process with pid 79146 00:10:17.348 03:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.348 03:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79146' 00:10:17.348 03:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 79146 00:10:17.348 03:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 79146 00:10:17.606 RPC TIMEOUT SETTING TEST PASSED. 00:10:17.606 03:19:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
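For reference, the default-vs-modified check that just passed reduces to the following: save the JSON config twice around one bdev_nvme_set_options call, then normalize each field with the grep | awk | sed chain seen in the trace (the sed strips the quotes and trailing comma that survive awk's field split). This is a hand-condensed rendering using this run's paths and PID suffix, not the literal test script:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default_79114
$rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified_79114
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default_79114  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep  "$setting" /tmp/settings_modified_79114 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [[ "$before" == "$after" ]]; then
        echo "Setting $setting was not changed!" >&2
        exit 1
    fi
    echo "Setting $setting is changed as expected."
done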
00:10:17.606 00:10:17.606 real 0m2.277s 00:10:17.606 user 0m4.574s 00:10:17.606 sys 0m0.472s 00:10:17.607 ************************************ 00:10:17.607 END TEST nvme_rpc_timeouts 00:10:17.607 ************************************ 00:10:17.607 03:19:05 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.607 03:19:05 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:17.607 03:19:05 -- spdk/autotest.sh@239 -- # uname -s 00:10:17.607 03:19:05 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:17.607 03:19:05 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:17.607 03:19:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:17.607 03:19:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.607 03:19:05 -- common/autotest_common.sh@10 -- # set +x 00:10:17.607 ************************************ 00:10:17.607 START TEST sw_hotplug 00:10:17.607 ************************************ 00:10:17.607 03:19:05 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:17.607 * Looking for test storage... 00:10:17.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:17.607 03:19:05 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.607 03:19:05 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.607 03:19:05 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.866 03:19:05 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.866 03:19:05 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:17.866 03:19:05 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.866 03:19:05 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.866 --rc genhtml_branch_coverage=1 00:10:17.866 --rc genhtml_function_coverage=1 00:10:17.866 --rc genhtml_legend=1 00:10:17.866 --rc geninfo_all_blocks=1 00:10:17.866 --rc geninfo_unexecuted_blocks=1 00:10:17.866 00:10:17.866 ' 00:10:17.866 03:19:05 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.866 --rc genhtml_branch_coverage=1 00:10:17.866 --rc genhtml_function_coverage=1 00:10:17.866 --rc genhtml_legend=1 00:10:17.866 --rc geninfo_all_blocks=1 00:10:17.866 --rc geninfo_unexecuted_blocks=1 00:10:17.866 00:10:17.866 ' 00:10:17.866 03:19:05 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.866 --rc genhtml_branch_coverage=1 00:10:17.866 --rc genhtml_function_coverage=1 00:10:17.866 --rc genhtml_legend=1 00:10:17.866 --rc geninfo_all_blocks=1 00:10:17.866 --rc geninfo_unexecuted_blocks=1 00:10:17.866 00:10:17.866 ' 00:10:17.866 03:19:05 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.866 --rc genhtml_branch_coverage=1 00:10:17.866 --rc genhtml_function_coverage=1 00:10:17.866 --rc genhtml_legend=1 00:10:17.866 --rc geninfo_all_blocks=1 00:10:17.866 --rc geninfo_unexecuted_blocks=1 00:10:17.866 00:10:17.866 ' 00:10:17.866 03:19:05 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:18.125 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:18.125 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:18.125 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:18.125 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:18.125 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:18.125 03:19:05 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:18.125 03:19:05 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:18.125 03:19:05 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:10:18.125 03:19:05 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:18.125 03:19:05 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:18.125 03:19:05 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:18.125 03:19:05 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:18.125 03:19:05 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:18.125 03:19:05 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:18.125 03:19:05 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:18.125 03:19:05 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:18.126 03:19:05 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:18.126 03:19:05 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:18.385 03:19:05 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:18.385 03:19:05 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:18.385 03:19:05 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:18.385 03:19:05 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:18.644 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:18.644 Waiting for block devices as requested 00:10:18.644 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.902 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.902 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.902 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.164 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:24.164 03:19:11 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:24.164 03:19:11 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:24.422 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:24.422 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:24.422 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:24.680 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:24.938 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:24.938 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:24.938 03:19:12 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:24.938 03:19:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:24.938 03:19:12 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:24.938 03:19:12 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:24.938 03:19:12 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=79993 00:10:24.938 03:19:12 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:24.938 03:19:12 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:24.938 03:19:12 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:24.938 03:19:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:24.938 03:19:12 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:24.938 03:19:12 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:24.938 03:19:12 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:24.938 03:19:12 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:24.938 03:19:12 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:24.938 03:19:12 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:24.938 03:19:12 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:24.938 03:19:12 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:24.938 03:19:12 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:24.938 03:19:12 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:25.197 Initializing NVMe Controllers 00:10:25.197 Attaching to 0000:00:10.0 00:10:25.197 Attaching to 0000:00:11.0 00:10:25.197 Attached to 0000:00:10.0 00:10:25.197 Attached to 0000:00:11.0 00:10:25.197 Initialization complete. Starting I/O... 
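Before the I/O load starts, it is worth unpacking the nvme_in_userspace walk traced above: class 01 / subclass 08 / progif 02 is the PCI class triple for NVMe, lspci -mm -n -D emits one machine-readable line per function, and the awk/tr pair picks out matching BDFs. A condensed sketch under the assumption of standard lspci output; the pci_can_use / PCI_ALLOWED filtering from the trace is elided:

# Enumerate NVMe-class PCI functions the way scripts/common.sh did above.
nvmes=()
while read -r bdf; do
    # keep only controllers currently bound to the kernel nvme driver,
    # mirroring the /sys/bus/pci/drivers/nvme/$bdf test in the trace
    [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && nvmes+=("$bdf")
done < <(lspci -mm -n -D | grep -i -- -p02 \
         | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"')
nvmes=("${nvmes[@]::2}")       # sw_hotplug then trims the list to nvme_count=2
printf '%s\n' "${nvmes[@]}"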
00:10:25.197 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:25.197 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:25.197 00:10:26.569 QEMU NVMe Ctrl (12340 ): 3280 I/Os completed (+3280) 00:10:26.569 QEMU NVMe Ctrl (12341 ): 3554 I/Os completed (+3554) 00:10:26.569 00:10:27.136 QEMU NVMe Ctrl (12340 ): 7082 I/Os completed (+3802) 00:10:27.136 QEMU NVMe Ctrl (12341 ): 7245 I/Os completed (+3691) 00:10:27.136 00:10:28.519 QEMU NVMe Ctrl (12340 ): 10815 I/Os completed (+3733) 00:10:28.519 QEMU NVMe Ctrl (12341 ): 11009 I/Os completed (+3764) 00:10:28.519 00:10:29.453 QEMU NVMe Ctrl (12340 ): 14920 I/Os completed (+4105) 00:10:29.453 QEMU NVMe Ctrl (12341 ): 15404 I/Os completed (+4395) 00:10:29.453 00:10:30.386 QEMU NVMe Ctrl (12340 ): 19013 I/Os completed (+4093) 00:10:30.386 QEMU NVMe Ctrl (12341 ): 19207 I/Os completed (+3803) 00:10:30.386 00:10:30.981 03:19:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:30.981 03:19:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:30.981 03:19:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:30.981 [2024-11-21 03:19:18.497352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:30.981 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:30.981 [2024-11-21 03:19:18.498246] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.981 [2024-11-21 03:19:18.498284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.981 [2024-11-21 03:19:18.498298] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.981 [2024-11-21 03:19:18.498308] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.981 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:30.981 [2024-11-21 03:19:18.499370] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.981 [2024-11-21 03:19:18.499409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.981 [2024-11-21 03:19:18.499427] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.981 [2024-11-21 03:19:18.499442] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.981 03:19:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:30.981 03:19:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:30.981 [2024-11-21 03:19:18.516126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:30.981 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:30.981 [2024-11-21 03:19:18.517142] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.981 [2024-11-21 03:19:18.517241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.981 [2024-11-21 03:19:18.517319] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.981 [2024-11-21 03:19:18.517357] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.981 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:30.981 [2024-11-21 03:19:18.518500] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.981 [2024-11-21 03:19:18.518548] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.981 [2024-11-21 03:19:18.518581] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.981 [2024-11-21 03:19:18.518651] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.981 03:19:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:30.981 03:19:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:31.244 03:19:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:31.244 03:19:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:31.244 03:19:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:31.244 03:19:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:31.244 03:19:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:31.244 03:19:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:31.244 03:19:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:31.244 03:19:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:31.244 Attaching to 0000:00:10.0 00:10:31.244 Attached to 0000:00:10.0 00:10:31.244 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:31.244 00:10:31.244 03:19:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:31.244 03:19:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:31.244 03:19:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:31.244 Attaching to 0000:00:11.0 00:10:31.244 Attached to 0000:00:11.0 00:10:32.175 QEMU NVMe Ctrl (12340 ): 4363 I/Os completed (+4363) 00:10:32.175 QEMU NVMe Ctrl (12341 ): 4078 I/Os completed (+4078) 00:10:32.175 00:10:33.563 QEMU NVMe Ctrl (12340 ): 8213 I/Os completed (+3850) 00:10:33.563 QEMU NVMe Ctrl (12341 ): 7844 I/Os completed (+3766) 00:10:33.563 00:10:34.129 QEMU NVMe Ctrl (12340 ): 11857 I/Os completed (+3644) 00:10:34.129 QEMU NVMe Ctrl (12341 ): 11478 I/Os completed (+3634) 00:10:34.129 00:10:35.504 QEMU NVMe Ctrl (12340 ): 15591 I/Os completed (+3734) 00:10:35.504 QEMU NVMe Ctrl (12341 ): 15273 I/Os completed (+3795) 00:10:35.504 00:10:36.438 QEMU NVMe Ctrl (12340 ): 19288 I/Os completed (+3697) 00:10:36.438 QEMU NVMe Ctrl (12341 ): 18885 I/Os completed (+3612) 00:10:36.438 00:10:37.373 QEMU NVMe Ctrl (12340 ): 23357 I/Os completed (+4069) 00:10:37.374 QEMU NVMe Ctrl (12341 ): 23078 I/Os completed (+4193) 00:10:37.374 00:10:38.308 QEMU NVMe Ctrl (12340 ): 27668 I/Os completed (+4311) 00:10:38.308 QEMU NVMe Ctrl (12341 ): 27421 I/Os completed (+4343) 00:10:38.309 00:10:39.243 QEMU NVMe Ctrl (12340 ): 32041 I/Os completed (+4373) 00:10:39.243 
QEMU NVMe Ctrl (12341 ): 31580 I/Os completed (+4159) 00:10:39.243 00:10:40.176 QEMU NVMe Ctrl (12340 ): 36083 I/Os completed (+4042) 00:10:40.176 QEMU NVMe Ctrl (12341 ): 35562 I/Os completed (+3982) 00:10:40.176 00:10:41.548 QEMU NVMe Ctrl (12340 ): 40136 I/Os completed (+4053) 00:10:41.548 QEMU NVMe Ctrl (12341 ): 39560 I/Os completed (+3998) 00:10:41.548 00:10:42.134 QEMU NVMe Ctrl (12340 ): 44519 I/Os completed (+4383) 00:10:42.134 QEMU NVMe Ctrl (12341 ): 44130 I/Os completed (+4570) 00:10:42.134 00:10:43.517 QEMU NVMe Ctrl (12340 ): 49118 I/Os completed (+4599) 00:10:43.517 QEMU NVMe Ctrl (12341 ): 48105 I/Os completed (+3975) 00:10:43.517 00:10:43.517 03:19:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:43.517 03:19:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:43.517 03:19:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:43.517 03:19:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:43.517 [2024-11-21 03:19:30.784466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:43.517 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:43.517 [2024-11-21 03:19:30.787616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.517 [2024-11-21 03:19:30.787772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.517 [2024-11-21 03:19:30.787812] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.517 [2024-11-21 03:19:30.787886] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.517 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:43.517 [2024-11-21 03:19:30.789532] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.517 [2024-11-21 03:19:30.789687] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.517 [2024-11-21 03:19:30.789710] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.517 [2024-11-21 03:19:30.789724] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.517 03:19:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:43.517 03:19:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:43.517 [2024-11-21 03:19:30.810055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:43.517 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:43.517 [2024-11-21 03:19:30.811008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.518 [2024-11-21 03:19:30.811048] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.518 [2024-11-21 03:19:30.811063] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.518 [2024-11-21 03:19:30.811080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.518 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:43.518 [2024-11-21 03:19:30.812268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.518 [2024-11-21 03:19:30.812311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.518 [2024-11-21 03:19:30.812324] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.518 [2024-11-21 03:19:30.812338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.518 03:19:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:43.518 03:19:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:43.518 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:43.518 EAL: Scan for (pci) bus failed. 00:10:43.518 03:19:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:43.518 03:19:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:43.518 03:19:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:43.518 03:19:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:43.518 03:19:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:43.518 03:19:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:43.518 03:19:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:43.518 03:19:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:43.518 Attaching to 0000:00:10.0 00:10:43.518 Attached to 0000:00:10.0 00:10:43.518 03:19:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:43.518 03:19:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:43.518 03:19:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:43.518 Attaching to 0000:00:11.0 00:10:43.518 Attached to 0000:00:11.0 00:10:44.452 QEMU NVMe Ctrl (12340 ): 2959 I/Os completed (+2959) 00:10:44.452 QEMU NVMe Ctrl (12341 ): 2598 I/Os completed (+2598) 00:10:44.452 00:10:45.388 QEMU NVMe Ctrl (12340 ): 7143 I/Os completed (+4184) 00:10:45.388 QEMU NVMe Ctrl (12341 ): 6867 I/Os completed (+4269) 00:10:45.388 00:10:46.326 QEMU NVMe Ctrl (12340 ): 11553 I/Os completed (+4410) 00:10:46.326 QEMU NVMe Ctrl (12341 ): 11310 I/Os completed (+4443) 00:10:46.326 00:10:47.267 QEMU NVMe Ctrl (12340 ): 15321 I/Os completed (+3768) 00:10:47.267 QEMU NVMe Ctrl (12341 ): 15001 I/Os completed (+3691) 00:10:47.267 00:10:48.202 QEMU NVMe Ctrl (12340 ): 19672 I/Os completed (+4351) 00:10:48.202 QEMU NVMe Ctrl (12341 ): 19391 I/Os completed (+4390) 00:10:48.202 00:10:49.138 QEMU NVMe Ctrl (12340 ): 23863 I/Os completed (+4191) 00:10:49.138 QEMU NVMe Ctrl (12341 ): 23584 I/Os completed (+4193) 00:10:49.138 00:10:50.512 QEMU NVMe Ctrl (12340 ): 28034 I/Os completed (+4171) 00:10:50.512 QEMU NVMe Ctrl (12341 ): 27767 I/Os completed (+4183) 00:10:50.512 
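Between these I/O samples, each hotplug event above follows the same shape: sw_hotplug.sh@40 surprise-removes a controller (triggering the nvme_ctrlr_fail and aborting-outstanding-command storm), and @56..@62 bring it back and rebind it. xtrace does not record redirection targets, so the sysfs paths below are an assumption based on the conventional Linux PCI interface, and the mapping of the two BDF writes (@60/@61) onto unbind and drivers_probe is a guess; treat this as an orientation sketch only:

dev=0000:00:10.0                                      # first of the two test controllers
echo 1 > /sys/bus/pci/devices/$dev/remove             # @40: surprise-remove (assumed path)
sleep 6                                               # hotplug_wait=6 from the test setup
echo 1 > /sys/bus/pci/rescan                          # @56: let the bus re-discover it (assumed path)
echo uio_pci_generic > /sys/bus/pci/devices/$dev/driver_override   # @59 (assumed path)
echo $dev > /sys/bus/pci/drivers/uio_pci_generic/unbind 2>/dev/null || true   # @60 (assumed)
echo $dev > /sys/bus/pci/drivers_probe                # @61 (assumed): rebind via the override
echo '' > /sys/bus/pci/devices/$dev/driver_override   # @62: clear the override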
00:10:51.447 QEMU NVMe Ctrl (12340 ): 32214 I/Os completed (+4180) 00:10:51.447 QEMU NVMe Ctrl (12341 ): 31931 I/Os completed (+4164) 00:10:51.447 00:10:52.384 QEMU NVMe Ctrl (12340 ): 36390 I/Os completed (+4176) 00:10:52.384 QEMU NVMe Ctrl (12341 ): 36102 I/Os completed (+4171) 00:10:52.384 00:10:53.320 QEMU NVMe Ctrl (12340 ): 40550 I/Os completed (+4160) 00:10:53.320 QEMU NVMe Ctrl (12341 ): 40249 I/Os completed (+4147) 00:10:53.320 00:10:54.262 QEMU NVMe Ctrl (12340 ): 44807 I/Os completed (+4257) 00:10:54.262 QEMU NVMe Ctrl (12341 ): 44497 I/Os completed (+4248) 00:10:54.262 00:10:55.266 QEMU NVMe Ctrl (12340 ): 48902 I/Os completed (+4095) 00:10:55.266 QEMU NVMe Ctrl (12341 ): 48619 I/Os completed (+4122) 00:10:55.266 00:10:55.528 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:55.528 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:55.528 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:55.528 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:55.528 [2024-11-21 03:19:43.060582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:55.528 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:55.528 [2024-11-21 03:19:43.061972] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.528 [2024-11-21 03:19:43.062199] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.528 [2024-11-21 03:19:43.062291] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.528 [2024-11-21 03:19:43.062373] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.528 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:55.528 [2024-11-21 03:19:43.064132] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.528 [2024-11-21 03:19:43.064228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.528 [2024-11-21 03:19:43.064261] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.528 [2024-11-21 03:19:43.064290] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.528 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:10:55.528 EAL: Scan for (pci) bus failed. 00:10:55.528 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:55.528 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:55.528 [2024-11-21 03:19:43.083591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:55.528 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:55.528 [2024-11-21 03:19:43.084745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.528 [2024-11-21 03:19:43.084912] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.528 [2024-11-21 03:19:43.084949] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.528 [2024-11-21 03:19:43.084981] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.528 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:55.528 [2024-11-21 03:19:43.086340] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.528 [2024-11-21 03:19:43.086434] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.528 [2024-11-21 03:19:43.086466] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.528 [2024-11-21 03:19:43.086496] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.790 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:55.790 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:55.790 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:55.790 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:55.790 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:55.790 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:55.790 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:55.790 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:55.790 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:55.790 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:55.790 Attaching to 0000:00:10.0 00:10:55.790 Attached to 0000:00:10.0 00:10:56.052 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:56.052 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:56.052 03:19:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:56.052 Attaching to 0000:00:11.0 00:10:56.052 Attached to 0000:00:11.0 00:10:56.052 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:56.052 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:56.052 [2024-11-21 03:19:43.398572] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:08.288 03:19:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:08.288 03:19:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:08.289 03:19:55 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.89 00:11:08.289 03:19:55 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.89 00:11:08.289 03:19:55 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:08.289 03:19:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.89 00:11:08.289 03:19:55 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.89 2 00:11:08.289 remove_attach_helper took 42.89s to complete (handling 2 nvme drive(s)) 03:19:55 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:14.875 03:20:01 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 79993 00:11:14.875 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (79993) - No such process 00:11:14.875 03:20:01 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 79993 00:11:14.875 03:20:01 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:14.875 03:20:01 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:14.875 03:20:01 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:14.875 03:20:01 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=80542 00:11:14.875 03:20:01 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:14.875 03:20:01 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 80542 00:11:14.876 03:20:01 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 80542 ']' 00:11:14.876 03:20:01 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.876 03:20:01 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.876 03:20:01 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.876 03:20:01 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.876 03:20:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.876 03:20:01 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:14.876 [2024-11-21 03:20:01.475544] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:11:14.876 [2024-11-21 03:20:01.475663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80542 ] 00:11:14.876 [2024-11-21 03:20:01.606849] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
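The waitforlisten call traced here is the harness's standard rendezvous: start spdk_tgt, then poll until the pid is alive and the RPC socket at /var/tmp/spdk.sock answers, giving up after max_retries=100. A behavioural sketch; the real helper lives in autotest_common.sh, and the rpc_get_methods probe below is an assumption about how it tests the socket:

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < 100; i++ )); do              # max_retries=100, as in the trace
        kill -0 "$pid" 2>/dev/null || return 1     # target died while we waited
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                               # socket is up and answering RPCs
        fi
        sleep 0.5
    done
    return 1                                       # never came up within the retry budget
}
waitforlisten_sketch 80542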
00:11:14.876 [2024-11-21 03:20:01.633673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.876 [2024-11-21 03:20:01.651647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.876 03:20:02 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.876 03:20:02 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:14.876 03:20:02 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:14.876 03:20:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.876 03:20:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.876 03:20:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.876 03:20:02 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:14.876 03:20:02 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:14.876 03:20:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:14.876 03:20:02 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:14.876 03:20:02 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:14.876 03:20:02 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:14.876 03:20:02 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:14.876 03:20:02 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:14.876 03:20:02 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:14.876 03:20:02 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:14.876 03:20:02 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:14.876 03:20:02 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:14.876 03:20:02 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:21.451 03:20:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.451 03:20:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:21.451 03:20:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:21.451 [2024-11-21 03:20:08.399881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:21.451 [2024-11-21 03:20:08.401079] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.451 [2024-11-21 03:20:08.401119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.451 [2024-11-21 03:20:08.401134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.451 [2024-11-21 03:20:08.401153] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.451 [2024-11-21 03:20:08.401161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.451 [2024-11-21 03:20:08.401171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.451 [2024-11-21 03:20:08.401178] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.451 [2024-11-21 03:20:08.401188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.451 [2024-11-21 03:20:08.401195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.451 [2024-11-21 03:20:08.401204] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.451 [2024-11-21 03:20:08.401211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.451 [2024-11-21 03:20:08.401219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:21.451 03:20:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.451 03:20:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:21.451 [2024-11-21 03:20:08.899877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:21.451 [2024-11-21 03:20:08.900995] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.451 [2024-11-21 03:20:08.901029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.451 [2024-11-21 03:20:08.901042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.451 [2024-11-21 03:20:08.901054] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.451 [2024-11-21 03:20:08.901063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.451 [2024-11-21 03:20:08.901070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.451 [2024-11-21 03:20:08.901079] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.451 [2024-11-21 03:20:08.901085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.451 [2024-11-21 03:20:08.901097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.451 [2024-11-21 03:20:08.901103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.451 [2024-11-21 03:20:08.901111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.451 [2024-11-21 03:20:08.901118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.451 03:20:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:21.451 03:20:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:22.017 03:20:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:22.017 03:20:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:22.017 03:20:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:22.017 03:20:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:22.017 03:20:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:22.017 03:20:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:22.017 03:20:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.017 03:20:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:22.017 03:20:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.017 03:20:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:22.017 03:20:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:22.275 03:20:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:22.275 03:20:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:22.275 03:20:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:22.275 03:20:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:22.275 03:20:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:22.275 03:20:09 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:22.275 03:20:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:22.275 03:20:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:22.275 03:20:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:22.275 03:20:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:22.275 03:20:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:34.484 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:34.485 03:20:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.485 03:20:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.485 03:20:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:34.485 [2024-11-21 03:20:21.800267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:34.485 [2024-11-21 03:20:21.801727] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.485 [2024-11-21 03:20:21.801794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.485 [2024-11-21 03:20:21.801831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.485 [2024-11-21 03:20:21.801864] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.485 [2024-11-21 03:20:21.801881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.485 [2024-11-21 03:20:21.801922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.485 [2024-11-21 03:20:21.801948] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.485 [2024-11-21 03:20:21.801968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.485 [2024-11-21 03:20:21.802084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.485 [2024-11-21 03:20:21.802115] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.485 [2024-11-21 03:20:21.802170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.485 [2024-11-21 03:20:21.802202] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.485 03:20:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.485 03:20:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.485 03:20:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:34.485 03:20:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:34.744 [2024-11-21 03:20:22.300266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:34.744 [2024-11-21 03:20:22.301411] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.744 [2024-11-21 03:20:22.301444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.744 [2024-11-21 03:20:22.301457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.744 [2024-11-21 03:20:22.301467] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.744 [2024-11-21 03:20:22.301476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.744 [2024-11-21 03:20:22.301484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.744 [2024-11-21 03:20:22.301494] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.744 [2024-11-21 03:20:22.301501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.744 [2024-11-21 03:20:22.301510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.744 [2024-11-21 03:20:22.301516] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.744 [2024-11-21 03:20:22.301524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.744 [2024-11-21 03:20:22.301531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.003 03:20:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:35.003 03:20:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:35.003 03:20:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:35.003 03:20:22 sw_hotplug -- 
nvme/sw_hotplug.sh@13 -- # sort -u 00:11:35.003 03:20:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:35.003 03:20:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:35.003 03:20:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.003 03:20:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.003 03:20:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.003 03:20:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:35.003 03:20:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:35.003 03:20:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:35.003 03:20:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:35.003 03:20:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:35.003 03:20:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:35.261 03:20:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:35.261 03:20:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:35.261 03:20:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:35.261 03:20:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:35.261 03:20:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:35.261 03:20:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:35.261 03:20:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.549 03:20:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.549 03:20:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.549 03:20:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:47.549 [2024-11-21 03:20:34.700670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
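[Annotation] The re-attach half of each cycle is traced at sw_hotplug.sh@56-62: a bare 'echo 1', then per device 'echo uio_pci_generic', the BDF twice, and an empty string. Only those values appear in the log; the sysfs destinations below are one plausible mapping via the standard driver_override mechanism, not paths taken from the trace.

    echo 1 > /sys/bus/pci/rescan                                            # sh@56 (assumed path)
    for dev in "${nvmes[@]}"; do                                            # sh@58
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # sh@59
        echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind" 2>/dev/null # sh@60 (no-op if unbound)
        echo "$dev" > /sys/bus/pci/drivers_probe                            # sh@61: bind per the override
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # sh@62: clear the override
    done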
00:11:47.549 [2024-11-21 03:20:34.702189] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.549 [2024-11-21 03:20:34.702300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.549 [2024-11-21 03:20:34.702374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.549 [2024-11-21 03:20:34.702409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.549 [2024-11-21 03:20:34.702429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.549 [2024-11-21 03:20:34.702455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.549 [2024-11-21 03:20:34.702479] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.549 [2024-11-21 03:20:34.702498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.549 [2024-11-21 03:20:34.702582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.549 [2024-11-21 03:20:34.702612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.549 [2024-11-21 03:20:34.702628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.549 [2024-11-21 03:20:34.702658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.549 03:20:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.549 03:20:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.549 03:20:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:47.549 03:20:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:47.549 [2024-11-21 03:20:35.100672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
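[Annotation] The shrinking counts in the '(( 2 > 0 ))', '(( 1 > 0 ))', '(( 0 > 0 ))' guards above are ${#bdfs[@]}, the number of NVMe bdevs still registered; the loop polls every half second until both controllers have detached. Condensed from sw_hotplug.sh@50-51:

    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"   # sh@51
        sleep 0.5
        bdfs=($(bdev_bdfs))                                       # re-query SPDK
    done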
00:11:47.549 [2024-11-21 03:20:35.101701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.549 [2024-11-21 03:20:35.101733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.549 [2024-11-21 03:20:35.101745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.549 [2024-11-21 03:20:35.101756] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.549 [2024-11-21 03:20:35.101767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.549 [2024-11-21 03:20:35.101774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.549 [2024-11-21 03:20:35.101784] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.549 [2024-11-21 03:20:35.101791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.549 [2024-11-21 03:20:35.101799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.549 [2024-11-21 03:20:35.101805] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.549 [2024-11-21 03:20:35.101812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.549 [2024-11-21 03:20:35.101819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.810 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:47.810 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:47.810 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:47.810 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.810 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.810 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.810 03:20:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.810 03:20:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.810 03:20:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.810 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:47.810 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:48.070 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.070 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.070 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:48.070 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:48.070 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:48.070 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.070 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.070 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:48.070 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:48.070 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:48.070 03:20:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:00.301 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:00.301 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:00.301 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:00.301 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.301 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.301 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.301 03:20:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.301 03:20:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.301 03:20:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.301 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:00.301 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:00.301 03:20:47 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.28 00:12:00.301 03:20:47 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.28 00:12:00.301 03:20:47 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:00.301 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.28 00:12:00.301 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.28 2 00:12:00.301 remove_attach_helper took 45.28s to complete (handling 2 nvme drive(s)) 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:00.301 03:20:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.301 03:20:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.301 03:20:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.301 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:00.301 03:20:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.301 03:20:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.301 03:20:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.301 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:00.301 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:00.301 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:00.301 03:20:47 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:00.301 03:20:47 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:00.301 03:20:47 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:00.301 03:20:47 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:00.301 03:20:47 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:00.301 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:00.301 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:00.301 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:00.301 03:20:47 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:00.301 03:20:47 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:06.892 03:20:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:06.892 03:20:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:06.892 03:20:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:06.892 03:20:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:06.892 03:20:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:06.892 03:20:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:06.892 03:20:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:06.892 03:20:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:06.892 03:20:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:06.892 03:20:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:06.892 03:20:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:06.892 03:20:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.892 03:20:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.892 03:20:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.892 03:20:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:06.892 03:20:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:06.892 [2024-11-21 03:20:53.712685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:06.892 [2024-11-21 03:20:53.713675] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.892 [2024-11-21 03:20:53.713710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.892 [2024-11-21 03:20:53.713722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.892 [2024-11-21 03:20:53.713735] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.892 [2024-11-21 03:20:53.713742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.892 [2024-11-21 03:20:53.713751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.892 [2024-11-21 03:20:53.713758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.892 [2024-11-21 03:20:53.713769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.892 [2024-11-21 03:20:53.713775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.892 [2024-11-21 03:20:53.713783] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.892 [2024-11-21 03:20:53.713789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.892 [2024-11-21 03:20:53.713797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.892 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:06.892 03:20:54 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:06.892 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:06.892 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:06.893 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:06.893 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:06.893 03:20:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.893 03:20:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.893 03:20:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.893 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:06.893 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:06.893 [2024-11-21 03:20:54.412701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:06.893 [2024-11-21 03:20:54.413466] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.893 [2024-11-21 03:20:54.413498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.893 [2024-11-21 03:20:54.413510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.893 [2024-11-21 03:20:54.413520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.893 [2024-11-21 03:20:54.413529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.893 [2024-11-21 03:20:54.413536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.893 [2024-11-21 03:20:54.413544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.893 [2024-11-21 03:20:54.413551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.893 [2024-11-21 03:20:54.413559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.893 [2024-11-21 03:20:54.413566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.893 [2024-11-21 03:20:54.413576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.893 [2024-11-21 03:20:54.413582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.465 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:07.465 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:07.465 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:07.465 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:07.465 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:07.465 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:07.465 03:20:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.465 03:20:54 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:12:07.465 03:20:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.465 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:07.465 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:07.465 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:07.465 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:07.465 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:07.465 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:07.465 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:07.465 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:07.465 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:07.465 03:20:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:07.727 03:20:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:07.727 03:20:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:07.727 03:20:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:20.014 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:20.014 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:20.014 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:20.014 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:20.014 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:20.014 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:20.014 03:21:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.014 03:21:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:20.015 03:21:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.015 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:20.015 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:20.015 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:20.015 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:20.015 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:20.015 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:20.015 [2024-11-21 03:21:07.113024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
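[Annotation] Between the two timed helper runs the test tears down and re-arms SPDK's NVMe hotplug monitor over RPC (sw_hotplug.sh@119-120, traced a little further up; the first arm was at sh@115). Both flags appear verbatim in the trace:

    rpc_cmd bdev_nvme_set_hotplug -d   # sh@119: disable the hotplug poller
    rpc_cmd bdev_nvme_set_hotplug -e   # sh@120: re-enable it for the next timed run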
00:12:20.015 [2024-11-21 03:21:07.114017] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.015 [2024-11-21 03:21:07.114117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.015 [2024-11-21 03:21:07.114177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.015 [2024-11-21 03:21:07.114288] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.015 [2024-11-21 03:21:07.114308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.015 [2024-11-21 03:21:07.114333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.015 [2024-11-21 03:21:07.114386] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.015 [2024-11-21 03:21:07.114408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.015 [2024-11-21 03:21:07.114453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.015 [2024-11-21 03:21:07.114478] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.015 [2024-11-21 03:21:07.114495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.015 [2024-11-21 03:21:07.114550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.015 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:20.015 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:20.015 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:20.015 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:20.015 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:20.015 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:20.015 03:21:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.015 03:21:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:20.015 03:21:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.015 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:20.015 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:20.015 [2024-11-21 03:21:07.513019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
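[Annotation] A note on the '[[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:... ]]' lines: the backslashes are not corruption. The right-hand side of == inside [[ ]] is a pattern, so when the script quotes it, bash's xtrace prints every character escaped to show the match is literal. The underlying check (sw_hotplug.sh@71) is simply:

    # expected_bdfs is illustrative; join/quoting details in the real script differ slightly
    [[ "$(bdev_bdfs)" == "$expected_bdfs" ]]   # literal match: both BDFs are back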
00:12:20.015 [2024-11-21 03:21:07.513758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.015 [2024-11-21 03:21:07.513786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.015 [2024-11-21 03:21:07.513798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.015 [2024-11-21 03:21:07.513808] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.015 [2024-11-21 03:21:07.513817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.015 [2024-11-21 03:21:07.513824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.015 [2024-11-21 03:21:07.513832] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.015 [2024-11-21 03:21:07.513838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.015 [2024-11-21 03:21:07.513847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.015 [2024-11-21 03:21:07.513853] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.015 [2024-11-21 03:21:07.513860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.015 [2024-11-21 03:21:07.513867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.276 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:20.276 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:20.276 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:20.276 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:20.276 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:20.276 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:20.276 03:21:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.276 03:21:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:20.276 03:21:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.276 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:20.276 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:20.276 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:20.277 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:20.277 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:20.536 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:20.536 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:20.536 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:20.536 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:20.536 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:20.536 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:20.536 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:20.536 03:21:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:32.775 03:21:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:32.775 03:21:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:32.775 03:21:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:32.775 03:21:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:32.775 03:21:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:32.775 03:21:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:32.775 03:21:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.775 03:21:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:32.775 03:21:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.776 03:21:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:32.776 03:21:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:32.776 03:21:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:32.776 03:21:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:32.776 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:32.776 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:32.776 [2024-11-21 03:21:20.013228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:32.776 [2024-11-21 03:21:20.014207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.776 [2024-11-21 03:21:20.014299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.776 [2024-11-21 03:21:20.014500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.776 [2024-11-21 03:21:20.014920] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.776 [2024-11-21 03:21:20.014943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.776 [2024-11-21 03:21:20.014994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.776 [2024-11-21 03:21:20.015022] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.776 [2024-11-21 03:21:20.015040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.776 [2024-11-21 03:21:20.015067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.776 [2024-11-21 03:21:20.015094] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.776 [2024-11-21 03:21:20.015111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.776 [2024-11-21 03:21:20.015136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.776 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:32.776 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:32.776 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:32.776 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:32.776 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:32.776 03:21:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.776 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:32.776 03:21:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:32.776 03:21:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.776 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:32.776 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:33.037 [2024-11-21 03:21:20.513230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:33.037 [2024-11-21 03:21:20.514105] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.037 [2024-11-21 03:21:20.514209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.037 [2024-11-21 03:21:20.514303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.037 [2024-11-21 03:21:20.514372] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.037 [2024-11-21 03:21:20.514393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.037 [2024-11-21 03:21:20.514420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.037 [2024-11-21 03:21:20.514447] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.037 [2024-11-21 03:21:20.514586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.037 [2024-11-21 03:21:20.514614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.037 [2024-11-21 03:21:20.514638] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.037 [2024-11-21 03:21:20.514657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.037 [2024-11-21 03:21:20.514680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.037 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:33.037 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:33.037 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:33.037 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:33.037 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:33.037 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
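[Annotation] The '45.28'/'45.26' figures in the two 'remove_attach_helper took ...' summaries come from the timing_cmd wrapper traced at autotest_common.sh@709-722: it runs the helper under bash's time with TIMEFORMAT=%2R (real time, two decimals) and echoes only that number for the caller to capture. A minimal sketch, assuming the file-descriptor plumbing (the trace shows only the '[[ -t 0 ]]'/exec probe and the TIMEFORMAT assignment):

    timing_cmd() {
        local cmd_es=0 time=0 TIMEFORMAT=%2R                   # ac@709, ac@713
        exec 3>&2                                              # keep the command's own output on stderr
        time=$( { time "$@" 1>&3 2>&3; } 2>&1 ) || cmd_es=$?   # capture only the %2R report
        exec 3>&-
        echo "$time"                                           # ac@720: e.g. "45.26"
        return "$cmd_es"                                       # ac@722
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)   # sh@21
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2                                       # sh@22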
00:12:33.037 03:21:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.037 03:21:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:33.037 03:21:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.298 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:33.298 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:33.298 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:33.298 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:33.298 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:33.298 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:33.298 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:33.298 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:33.298 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:33.298 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:33.298 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:33.298 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:33.298 03:21:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:45.526 03:21:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:45.526 03:21:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:45.526 03:21:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:45.526 03:21:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:45.526 03:21:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:45.526 03:21:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:45.526 03:21:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.526 03:21:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:45.526 03:21:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.526 03:21:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:45.526 03:21:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:45.526 03:21:32 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.26 00:12:45.526 03:21:32 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.26 00:12:45.526 03:21:32 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:45.526 03:21:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.26 00:12:45.526 03:21:32 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.26 2 00:12:45.526 remove_attach_helper took 45.26s to complete (handling 2 nvme drive(s)) 03:21:32 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:45.526 03:21:32 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 80542 00:12:45.526 03:21:32 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 80542 ']' 00:12:45.526 03:21:32 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 80542 00:12:45.526 03:21:32 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:45.526 03:21:32 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.526 03:21:32 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80542 00:12:45.526 03:21:32 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:45.526 03:21:32 sw_hotplug -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:45.526 03:21:32 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80542' 00:12:45.526 killing process with pid 80542 00:12:45.526 03:21:32 sw_hotplug -- common/autotest_common.sh@973 -- # kill 80542 00:12:45.526 03:21:32 sw_hotplug -- common/autotest_common.sh@978 -- # wait 80542 00:12:45.788 03:21:33 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:46.138 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:46.399 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:46.399 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:46.662 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:46.662 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:46.662 00:12:46.662 real 2m29.073s 00:12:46.662 user 1m50.218s 00:12:46.662 sys 0m17.697s 00:12:46.662 03:21:34 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.662 03:21:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:46.662 ************************************ 00:12:46.662 END TEST sw_hotplug 00:12:46.662 ************************************ 00:12:46.662 03:21:34 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:46.662 03:21:34 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:46.662 03:21:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:46.662 03:21:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.662 03:21:34 -- common/autotest_common.sh@10 -- # set +x 00:12:46.662 ************************************ 00:12:46.662 START TEST nvme_xnvme 00:12:46.662 ************************************ 00:12:46.662 03:21:34 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:46.926 * Looking for test storage... 
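[Annotation] Teardown of the test app (pid 80542) goes through the killprocess helper, whose behaviour can be pieced together from the autotest_common.sh@954-978 trace above: validate the pid, confirm the process is alive, check it is not the sudo wrapper (here it is reactor_0, SPDK's main thread), then kill and reap it. A sketch under those assumptions; the early-return codes and the non-Linux branch are guesses:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                             # ac@954
        kill -0 "$pid" || return 1                            # ac@958: still alive?
        if [ "$(uname)" = Linux ]; then                       # ac@959
            process_name=$(ps --no-headers -o comm= "$pid")   # ac@960: reactor_0 here
        fi
        [ "$process_name" = sudo ] && return 1                # ac@964: never kill the sudo parent
        echo "killing process with pid $pid"                  # ac@972
        kill "$pid"                                           # ac@973
        wait "$pid"                                           # ac@978: reap; the 'wait 80542' above
    }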
00:12:46.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:46.926 03:21:34 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:46.926 03:21:34 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:46.926 03:21:34 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:46.926 03:21:34 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:46.926 03:21:34 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:46.926 03:21:34 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.926 03:21:34 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:46.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.926 --rc genhtml_branch_coverage=1 00:12:46.926 --rc genhtml_function_coverage=1 00:12:46.926 --rc genhtml_legend=1 00:12:46.926 --rc geninfo_all_blocks=1 00:12:46.926 --rc geninfo_unexecuted_blocks=1 00:12:46.926 00:12:46.926 ' 00:12:46.926 03:21:34 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:46.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.927 --rc genhtml_branch_coverage=1 00:12:46.927 --rc genhtml_function_coverage=1 00:12:46.927 --rc genhtml_legend=1 00:12:46.927 --rc geninfo_all_blocks=1 00:12:46.927 --rc geninfo_unexecuted_blocks=1 00:12:46.927 00:12:46.927 ' 00:12:46.927 03:21:34 
nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:46.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.927 --rc genhtml_branch_coverage=1 00:12:46.927 --rc genhtml_function_coverage=1 00:12:46.927 --rc genhtml_legend=1 00:12:46.927 --rc geninfo_all_blocks=1 00:12:46.927 --rc geninfo_unexecuted_blocks=1 00:12:46.927 00:12:46.927 ' 00:12:46.927 03:21:34 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:46.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.927 --rc genhtml_branch_coverage=1 00:12:46.927 --rc genhtml_function_coverage=1 00:12:46.927 --rc genhtml_legend=1 00:12:46.927 --rc geninfo_all_blocks=1 00:12:46.927 --rc geninfo_unexecuted_blocks=1 00:12:46.927 00:12:46.927 ' 00:12:46.927 03:21:34 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:46.927 03:21:34 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:46.927 03:21:34 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:46.927 03:21:34 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:46.927 03:21:34 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:46.927 03:21:34 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:46.927 03:21:34 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:46.927 03:21:34 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:46.927 03:21:34 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:46.927 03:21:34 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:46.927 03:21:34 nvme_xnvme -- 
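(Editor's sketch, not captured output.) autotest_common.sh@45 above sources test/common/build_config.sh, which is nothing more than CONFIG_KEY=value shell assignments mirroring the configure step; downstream scripts gate on the flags directly. CONFIG_XNVME is real and set to y in this build; the if-block around it is illustrative:

    source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
    if [[ $CONFIG_XNVME == y ]]; then
        echo "SPDK built with --with-xnvme; xnvme bdev tests can run"
    else
        echo "skip: rebuild with --with-xnvme" >&2
    fi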
common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:46.927 03:21:34 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:46.927 03:21:34 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:46.927 03:21:34 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:46.927 03:21:34 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:46.927 03:21:34 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:46.927 03:21:34 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:46.928 03:21:34 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:46.928 03:21:34 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:46.928 03:21:34 nvme_xnvme -- 
common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:46.928 03:21:34 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:46.928 03:21:34 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:46.928 03:21:34 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:46.928 03:21:34 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:46.928 03:21:34 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:46.928 03:21:34 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:46.928 03:21:34 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:46.928 03:21:34 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:46.928 #define SPDK_CONFIG_H 00:12:46.928 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:46.928 #define SPDK_CONFIG_APPS 1 00:12:46.928 #define SPDK_CONFIG_ARCH native 00:12:46.928 #define SPDK_CONFIG_ASAN 1 00:12:46.928 #undef SPDK_CONFIG_AVAHI 00:12:46.928 #undef SPDK_CONFIG_CET 00:12:46.928 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:46.928 #define SPDK_CONFIG_COVERAGE 1 00:12:46.928 #define SPDK_CONFIG_CROSS_PREFIX 00:12:46.928 #undef SPDK_CONFIG_CRYPTO 00:12:46.928 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:46.928 #undef SPDK_CONFIG_CUSTOMOCF 00:12:46.928 #undef SPDK_CONFIG_DAOS 00:12:46.928 #define SPDK_CONFIG_DAOS_DIR 00:12:46.928 #define SPDK_CONFIG_DEBUG 1 00:12:46.928 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:46.928 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:12:46.928 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:12:46.928 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:12:46.928 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:46.928 #undef SPDK_CONFIG_DPDK_UADK 00:12:46.928 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:46.928 #define SPDK_CONFIG_EXAMPLES 1 00:12:46.928 #undef SPDK_CONFIG_FC 00:12:46.928 #define SPDK_CONFIG_FC_PATH 00:12:46.928 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:46.928 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:46.928 #define SPDK_CONFIG_FSDEV 1 00:12:46.928 #undef SPDK_CONFIG_FUSE 00:12:46.928 #undef SPDK_CONFIG_FUZZER 00:12:46.928 #define SPDK_CONFIG_FUZZER_LIB 00:12:46.928 #undef SPDK_CONFIG_GOLANG 00:12:46.928 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:46.928 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:46.928 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:46.928 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:46.928 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:46.928 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:46.928 #undef SPDK_CONFIG_HAVE_LZ4 00:12:46.928 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:46.928 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:46.928 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:46.928 #define SPDK_CONFIG_IDXD 1 00:12:46.928 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:46.928 #undef SPDK_CONFIG_IPSEC_MB 00:12:46.928 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:46.928 #define SPDK_CONFIG_ISAL 1 00:12:46.928 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:46.928 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:46.928 #define SPDK_CONFIG_LIBDIR 00:12:46.928 #undef SPDK_CONFIG_LTO 00:12:46.928 #define SPDK_CONFIG_MAX_LCORES 128 00:12:46.928 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:46.928 #define SPDK_CONFIG_NVME_CUSE 1 00:12:46.928 #undef 
SPDK_CONFIG_OCF 00:12:46.928 #define SPDK_CONFIG_OCF_PATH 00:12:46.928 #define SPDK_CONFIG_OPENSSL_PATH 00:12:46.928 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:46.928 #define SPDK_CONFIG_PGO_DIR 00:12:46.928 #undef SPDK_CONFIG_PGO_USE 00:12:46.928 #define SPDK_CONFIG_PREFIX /usr/local 00:12:46.928 #undef SPDK_CONFIG_RAID5F 00:12:46.928 #undef SPDK_CONFIG_RBD 00:12:46.928 #define SPDK_CONFIG_RDMA 1 00:12:46.928 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:46.928 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:46.928 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:46.928 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:46.928 #define SPDK_CONFIG_SHARED 1 00:12:46.928 #undef SPDK_CONFIG_SMA 00:12:46.928 #define SPDK_CONFIG_TESTS 1 00:12:46.928 #undef SPDK_CONFIG_TSAN 00:12:46.928 #define SPDK_CONFIG_UBLK 1 00:12:46.928 #define SPDK_CONFIG_UBSAN 1 00:12:46.928 #undef SPDK_CONFIG_UNIT_TESTS 00:12:46.928 #undef SPDK_CONFIG_URING 00:12:46.928 #define SPDK_CONFIG_URING_PATH 00:12:46.928 #undef SPDK_CONFIG_URING_ZNS 00:12:46.928 #undef SPDK_CONFIG_USDT 00:12:46.928 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:46.928 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:46.928 #undef SPDK_CONFIG_VFIO_USER 00:12:46.928 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:46.928 #define SPDK_CONFIG_VHOST 1 00:12:46.928 #define SPDK_CONFIG_VIRTIO 1 00:12:46.928 #undef SPDK_CONFIG_VTUNE 00:12:46.928 #define SPDK_CONFIG_VTUNE_DIR 00:12:46.928 #define SPDK_CONFIG_WERROR 1 00:12:46.928 #define SPDK_CONFIG_WPDK_DIR 00:12:46.928 #define SPDK_CONFIG_XNVME 1 00:12:46.928 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:46.928 03:21:34 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:46.928 03:21:34 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:46.928 03:21:34 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.928 03:21:34 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.928 03:21:34 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.928 03:21:34 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.928 03:21:34 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.928 03:21:34 nvme_xnvme -- paths/export.sh@4 -- # 
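(Editor's sketch, not captured output.) The long backslash-heavy pattern that closes the config.h dump above is just xtrace quoting of the literal string "#define SPDK_CONFIG_DEBUG": applications.sh@23 globs the whole header against it to detect a debug build. Unescaped, the check is equivalent to:

    config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
    if [[ $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build detected; SPDK_AUTOTEST_DEBUG_APPS may take effect"
    fi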
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.928 03:21:34 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:46.928 03:21:34 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:46.928 03:21:34 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:46.928 03:21:34 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:46.929 03:21:34 
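(Editor's sketch, not captured output.) The paired "-- # : 1" / "export RUN_NIGHTLY" entries above, and the long run of SPDK_TEST_* exports that continues below, are all one idiom: assign a default only when the caller has not already set the flag, then export it. Defaults shown here are illustrative; in this run RUN_NIGHTLY=1 and, for example, SPDK_TEST_NVME=1:

    : "${RUN_NIGHTLY:=1}"      # traced as "-- # : 1" because only the expansion runs
    export RUN_NIGHTLY
    : "${SPDK_TEST_NVME:=0}"   # same pattern repeats for every SPDK_TEST_* switch
    export SPDK_TEST_NVME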
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@140 -- # : main 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:46.929 03:21:34 
nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@182 -- # 
DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:46.929 03:21:34 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@200 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:46.930 03:21:34 nvme_xnvme -- 
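(Editor's sketch, not captured output.) autotest_common.sh@204-244 above regenerates a LeakSanitizer suppression file and points the leak checker at it; the only suppression visible in this trace is the known libfuse3 leak. Condensed, with the file path as it appears in the trace:

    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" > "$asan_suppression_file"   # ignore a known fuse leak
    export LSAN_OPTIONS="suppressions=$asan_suppression_file"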
common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 81901 ]] 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 81901 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.8c9UFV 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.8c9UFV/tests/xnvme /tmp/spdk.8c9UFV 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13241376768 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:46.930 03:21:34 nvme_xnvme -- 
common/autotest_common.sh@376 -- # uses["$mount"]=6343909376 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261964800 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13241376768 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6343909376 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265217024 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=176128 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:46.930 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:46.931 03:21:34 nvme_xnvme -- 
common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97333997568 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2368782336 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:46.931 * Looking for test storage... 
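(Editor's sketch, not captured output.) The block of mounts/fss/sizes/avails/uses assignments above is set_test_storage walking df -T output into associative arrays keyed by mount point; the entries that follow below then pick the first storage candidate with at least the requested 2 GiB free. A condensed form, keeping the field names the trace uses:

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts[$mount]=$source
        fss[$mount]=$fs
        sizes[$mount]=$((size * 1024))    # df -T counts 1K blocks; the trace stores bytes
        avails[$mount]=$((avail * 1024))
        uses[$mount]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)
    echo "free under /home: ${avails[/home]:-n/a} bytes"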
00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13241376768 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:46.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:46.931 03:21:34 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:47.192 03:21:34 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:47.192 03:21:34 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.192 03:21:34 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.192 03:21:34 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.192 03:21:34 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.192 03:21:34 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.192 03:21:34 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.192 03:21:34 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.192 03:21:34 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:47.192 03:21:34 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.192 03:21:34 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.192 03:21:34 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:47.192 03:21:34 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:47.192 03:21:34 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:47.192 03:21:34 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.192 03:21:34 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:47.193 03:21:34 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:47.193 03:21:34 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:47.193 03:21:34 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.193 03:21:34 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:47.193 03:21:34 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.193 03:21:34 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:47.193 03:21:34 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:47.193 03:21:34 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.193 03:21:34 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:47.193 03:21:34 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.193 03:21:34 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.193 03:21:34 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.193 03:21:34 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:47.193 03:21:34 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.193 03:21:34 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:47.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.193 --rc genhtml_branch_coverage=1 00:12:47.193 --rc genhtml_function_coverage=1 00:12:47.193 --rc genhtml_legend=1 00:12:47.193 --rc geninfo_all_blocks=1 00:12:47.193 --rc geninfo_unexecuted_blocks=1 00:12:47.193 00:12:47.193 ' 00:12:47.193 03:21:34 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:47.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.193 --rc genhtml_branch_coverage=1 00:12:47.193 --rc genhtml_function_coverage=1 00:12:47.193 --rc genhtml_legend=1 00:12:47.193 --rc geninfo_all_blocks=1 
00:12:47.193 --rc geninfo_unexecuted_blocks=1 00:12:47.193 00:12:47.193 ' 00:12:47.193 03:21:34 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:47.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.193 --rc genhtml_branch_coverage=1 00:12:47.193 --rc genhtml_function_coverage=1 00:12:47.193 --rc genhtml_legend=1 00:12:47.193 --rc geninfo_all_blocks=1 00:12:47.193 --rc geninfo_unexecuted_blocks=1 00:12:47.193 00:12:47.193 ' 00:12:47.193 03:21:34 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:47.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.193 --rc genhtml_branch_coverage=1 00:12:47.193 --rc genhtml_function_coverage=1 00:12:47.193 --rc genhtml_legend=1 00:12:47.193 --rc geninfo_all_blocks=1 00:12:47.193 --rc geninfo_unexecuted_blocks=1 00:12:47.193 00:12:47.193 ' 00:12:47.193 03:21:34 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:47.193 03:21:34 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:47.193 03:21:34 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.193 03:21:34 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.193 03:21:34 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.193 03:21:34 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.193 03:21:34 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.193 03:21:34 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.193 03:21:34 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:47.193 03:21:34 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.193 03:21:34 nvme_xnvme -- 
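(Editor's sketch, not captured output.) paths/export.sh@2-6 above prepends the pinned Go, golangci-lint and protoc tool directories every time it is sourced, which is why the traced PATH carries the same directories several times over. The pattern itself is simply:

    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH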
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:47.193 03:21:34 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:47.453 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:47.714 Waiting for block devices as requested 00:12:47.714 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:47.714 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:47.714 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:47.975 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:53.263 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:53.263 03:21:40 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:53.523 03:21:40 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:53.523 03:21:40 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:53.523 03:21:41 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:53.523 03:21:41 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:53.523 03:21:41 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:53.523 03:21:41 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:53.523 03:21:41 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:53.787 No valid GPT data, bailing 00:12:53.787 03:21:41 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:53.787 03:21:41 nvme_xnvme -- 
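(Editor's sketch, not captured output.) xnvme/common.sh@12-57 above drives the whole test matrix from plain bash arrays: the list of I/O mechanisms, a map from mechanism to device node, and the parameter set that later becomes the bdev_xnvme_create RPC. Reproduced standalone with this run's values:

    xnvme_io=(libaio io_uring io_uring_cmd)
    declare -A xnvme_filename=(
        [libaio]=/dev/nvme0n1
        [io_uring]=/dev/nvme0n1
        [io_uring_cmd]=/dev/ng0n1   # uring passthrough uses the char (ng) node
    )
    declare -A method_bdev_xnvme_create_0=(
        [name]=xnvme_bdev
        [filename]=${xnvme_filename[libaio]}
        [io_mechanism]=libaio
        [conserve_cpu]=false
    )
    echo "create ${method_bdev_xnvme_create_0[name]} on ${method_bdev_xnvme_create_0[filename]}"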
scripts/common.sh@394 -- # pt= 00:12:53.787 03:21:41 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:53.787 03:21:41 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:53.787 03:21:41 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:53.787 03:21:41 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:53.787 03:21:41 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:53.787 03:21:41 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:53.787 03:21:41 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:53.787 03:21:41 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:53.787 03:21:41 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:53.787 03:21:41 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:53.787 03:21:41 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:53.787 03:21:41 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:53.787 03:21:41 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:53.787 03:21:41 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:53.787 03:21:41 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:53.787 03:21:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:53.787 03:21:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.787 03:21:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:53.787 ************************************ 00:12:53.787 START TEST xnvme_rpc 00:12:53.787 ************************************ 00:12:53.787 03:21:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:53.787 03:21:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:53.787 03:21:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:53.787 03:21:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:53.787 03:21:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:53.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.787 03:21:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82290 00:12:53.787 03:21:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82290 00:12:53.787 03:21:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82290 ']' 00:12:53.787 03:21:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.787 03:21:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.787 03:21:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.787 03:21:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.787 03:21:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.787 03:21:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:53.787 [2024-11-21 03:21:41.238028] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
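The xnvme_filename map traced above pairs each io_mechanism with the device node it drives: libaio and io_uring submit through the block device, while io_uring_cmd targets the NVMe generic character device for passthrough commands. A minimal bash sketch of that mapping, reconstructed from the xnvme/common.sh trace (device paths as probed on this host):

    # Reconstructed from the trace above; not part of the captured tool output.
    declare -A xnvme_filename=(
        ['libaio']='/dev/nvme0n1'      # block device, I/O submitted via the kernel block layer
        ['io_uring']='/dev/nvme0n1'    # same block device, submitted through io_uring
        ['io_uring_cmd']='/dev/ng0n1'  # NVMe generic char device, io_uring passthrough commands
    )
    for io in "${!xnvme_filename[@]}"; do
        printf '%s -> %s\n' "$io" "${xnvme_filename[$io]}"
    done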
00:12:53.787 [2024-11-21 03:21:41.238175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82290 ] 00:12:54.048 [2024-11-21 03:21:41.379308] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:54.048 [2024-11-21 03:21:41.411674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.048 [2024-11-21 03:21:41.442011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.620 xnvme_bdev 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.620 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme 
conserve_cpu 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82290 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82290 ']' 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82290 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82290 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.881 killing process with pid 82290 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82290' 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82290 00:12:54.881 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82290 00:12:55.143 00:12:55.143 real 0m1.421s 00:12:55.143 user 0m1.508s 00:12:55.143 sys 0m0.380s 00:12:55.143 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.143 ************************************ 00:12:55.143 END TEST xnvme_rpc 00:12:55.143 ************************************ 00:12:55.143 03:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.143 03:21:42 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:55.143 03:21:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:55.143 03:21:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.143 03:21:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.143 ************************************ 00:12:55.143 START TEST xnvme_bdevperf 00:12:55.143 ************************************ 00:12:55.143 03:21:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:55.143 03:21:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:55.143 03:21:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:55.143 03:21:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:55.143 03:21:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:55.143 03:21:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:55.143 03:21:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:55.143 03:21:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:55.143 { 00:12:55.143 "subsystems": [ 00:12:55.143 { 00:12:55.143 "subsystem": "bdev", 00:12:55.143 "config": [ 00:12:55.143 { 00:12:55.143 "params": { 00:12:55.143 "io_mechanism": "libaio", 00:12:55.143 "conserve_cpu": false, 00:12:55.143 "filename": "/dev/nvme0n1", 00:12:55.143 "name": "xnvme_bdev" 00:12:55.143 }, 00:12:55.143 "method": "bdev_xnvme_create" 00:12:55.143 }, 00:12:55.143 { 00:12:55.143 "method": "bdev_wait_for_examine" 00:12:55.143 } 00:12:55.143 ] 00:12:55.143 } 00:12:55.143 ] 00:12:55.143 } 00:12:55.403 [2024-11-21 03:21:42.706735] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:12:55.403 [2024-11-21 03:21:42.707071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82342 ] 00:12:55.404 [2024-11-21 03:21:42.843103] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:55.404 [2024-11-21 03:21:42.872336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.404 [2024-11-21 03:21:42.902012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.665 Running I/O for 5 seconds... 00:12:57.552 29740.00 IOPS, 116.17 MiB/s [2024-11-21T03:21:46.061Z] 28028.50 IOPS, 109.49 MiB/s [2024-11-21T03:21:47.450Z] 27241.67 IOPS, 106.41 MiB/s [2024-11-21T03:21:48.391Z] 27170.00 IOPS, 106.13 MiB/s 00:13:00.826 Latency(us) 00:13:00.826 [2024-11-21T03:21:48.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.826 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:00.826 xnvme_bdev : 5.00 27033.89 105.60 0.00 0.00 2362.38 450.56 8418.86 00:13:00.826 [2024-11-21T03:21:48.391Z] =================================================================================================================== 00:13:00.826 [2024-11-21T03:21:48.391Z] Total : 27033.89 105.60 0.00 0.00 2362.38 450.56 8418.86 00:13:00.826 03:21:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:00.826 03:21:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:00.826 03:21:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:00.826 03:21:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:00.826 03:21:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:00.826 { 00:13:00.826 "subsystems": [ 00:13:00.826 { 00:13:00.826 "subsystem": "bdev", 00:13:00.826 "config": [ 00:13:00.826 { 00:13:00.826 "params": { 00:13:00.826 "io_mechanism": "libaio", 00:13:00.826 "conserve_cpu": false, 00:13:00.826 "filename": "/dev/nvme0n1", 00:13:00.826 "name": "xnvme_bdev" 00:13:00.826 }, 00:13:00.826 "method": "bdev_xnvme_create" 00:13:00.826 }, 00:13:00.826 { 00:13:00.826 "method": "bdev_wait_for_examine" 
00:13:00.826 } 00:13:00.826 ] 00:13:00.826 } 00:13:00.826 ] 00:13:00.826 } 00:13:00.826 [2024-11-21 03:21:48.297453] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:13:00.826 [2024-11-21 03:21:48.297773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82406 ] 00:13:01.087 [2024-11-21 03:21:48.434225] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:01.088 [2024-11-21 03:21:48.464580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.088 [2024-11-21 03:21:48.495819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.088 Running I/O for 5 seconds... 00:13:03.419 34871.00 IOPS, 136.21 MiB/s [2024-11-21T03:21:51.928Z] 35617.50 IOPS, 139.13 MiB/s [2024-11-21T03:21:52.872Z] 34544.67 IOPS, 134.94 MiB/s [2024-11-21T03:21:53.831Z] 33704.00 IOPS, 131.66 MiB/s 00:13:06.266 Latency(us) 00:13:06.266 [2024-11-21T03:21:53.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.266 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:06.266 xnvme_bdev : 5.00 34119.48 133.28 0.00 0.00 1871.23 207.95 7763.50 00:13:06.266 [2024-11-21T03:21:53.831Z] =================================================================================================================== 00:13:06.266 [2024-11-21T03:21:53.831Z] Total : 34119.48 133.28 0.00 0.00 1871.23 207.95 7763.50 00:13:06.597 ************************************ 00:13:06.597 END TEST xnvme_bdevperf 00:13:06.597 ************************************ 00:13:06.597 00:13:06.597 real 0m11.192s 00:13:06.597 user 0m3.632s 00:13:06.597 sys 0m6.171s 00:13:06.597 03:21:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.597 03:21:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:06.597 03:21:53 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:06.597 03:21:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:06.597 03:21:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.597 03:21:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:06.597 ************************************ 00:13:06.597 START TEST xnvme_fio_plugin 00:13:06.597 ************************************ 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev 
--direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:06.597 03:21:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:06.598 03:21:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:06.598 03:21:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:06.598 03:21:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:06.598 03:21:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:06.598 { 00:13:06.598 "subsystems": [ 00:13:06.598 { 00:13:06.598 "subsystem": "bdev", 00:13:06.598 "config": [ 00:13:06.598 { 00:13:06.598 "params": { 00:13:06.598 "io_mechanism": "libaio", 00:13:06.598 "conserve_cpu": false, 00:13:06.598 "filename": "/dev/nvme0n1", 00:13:06.598 "name": "xnvme_bdev" 00:13:06.598 }, 00:13:06.598 "method": "bdev_xnvme_create" 00:13:06.598 }, 00:13:06.598 { 00:13:06.598 "method": "bdev_wait_for_examine" 00:13:06.598 } 00:13:06.598 ] 00:13:06.598 } 00:13:06.598 ] 00:13:06.598 } 00:13:06.598 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:06.598 fio-3.35 00:13:06.598 Starting 1 thread 00:13:13.218 00:13:13.218 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82514: Thu Nov 21 03:21:59 2024 00:13:13.218 read: IOPS=33.3k, BW=130MiB/s (136MB/s)(651MiB/5001msec) 00:13:13.218 slat (usec): min=4, max=2130, avg=19.10, stdev=93.14 00:13:13.218 clat (usec): min=107, max=4966, avg=1391.19, stdev=510.36 00:13:13.218 lat (usec): min=205, max=5054, avg=1410.29, stdev=500.51 00:13:13.218 clat percentiles (usec): 00:13:13.218 | 1.00th=[ 297], 5.00th=[ 578], 10.00th=[ 758], 20.00th=[ 
963], 00:13:13.218 | 30.00th=[ 1123], 40.00th=[ 1254], 50.00th=[ 1385], 60.00th=[ 1516], 00:13:13.218 | 70.00th=[ 1631], 80.00th=[ 1778], 90.00th=[ 1991], 95.00th=[ 2212], 00:13:13.218 | 99.00th=[ 2835], 99.50th=[ 3130], 99.90th=[ 3752], 99.95th=[ 4015], 00:13:13.218 | 99.99th=[ 4359] 00:13:13.218 bw ( KiB/s): min=128424, max=139800, per=99.81%, avg=132960.00, stdev=4049.25, samples=9 00:13:13.218 iops : min=32106, max=34950, avg=33240.00, stdev=1012.31, samples=9 00:13:13.218 lat (usec) : 250=0.52%, 500=2.92%, 750=6.30%, 1000=12.35% 00:13:13.218 lat (msec) : 2=68.20%, 4=9.67%, 10=0.05% 00:13:13.218 cpu : usr=48.78%, sys=43.56%, ctx=82, majf=0, minf=773 00:13:13.218 IO depths : 1=0.7%, 2=1.5%, 4=3.5%, 8=8.6%, 16=22.4%, 32=61.2%, >=64=2.1% 00:13:13.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.218 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:13.218 issued rwts: total=166548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.218 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:13.218 00:13:13.218 Run status group 0 (all jobs): 00:13:13.219 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=651MiB (682MB), run=5001-5001msec 00:13:13.219 ----------------------------------------------------- 00:13:13.219 Suppressions used: 00:13:13.219 count bytes template 00:13:13.219 1 11 /usr/src/fio/parse.c 00:13:13.219 1 8 libtcmalloc_minimal.so 00:13:13.219 1 904 libcrypto.so 00:13:13.219 ----------------------------------------------------- 00:13:13.219 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:13.219 03:21:59 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:13.219 03:21:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.219 { 00:13:13.219 "subsystems": [ 00:13:13.219 { 00:13:13.219 "subsystem": "bdev", 00:13:13.219 "config": [ 00:13:13.219 { 00:13:13.219 "params": { 00:13:13.219 "io_mechanism": "libaio", 00:13:13.219 "conserve_cpu": false, 00:13:13.219 "filename": "/dev/nvme0n1", 00:13:13.219 "name": "xnvme_bdev" 00:13:13.219 }, 00:13:13.219 "method": "bdev_xnvme_create" 00:13:13.219 }, 00:13:13.219 { 00:13:13.219 "method": "bdev_wait_for_examine" 00:13:13.219 } 00:13:13.219 ] 00:13:13.219 } 00:13:13.219 ] 00:13:13.219 } 00:13:13.219 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:13.219 fio-3.35 00:13:13.219 Starting 1 thread 00:13:18.505 00:13:18.505 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82600: Thu Nov 21 03:22:05 2024 00:13:18.505 write: IOPS=35.0k, BW=137MiB/s (143MB/s)(684MiB/5003msec); 0 zone resets 00:13:18.505 slat (usec): min=4, max=1995, avg=18.69, stdev=84.33 00:13:18.505 clat (usec): min=107, max=5201, avg=1305.93, stdev=506.86 00:13:18.505 lat (usec): min=208, max=5290, avg=1324.62, stdev=499.24 00:13:18.505 clat percentiles (usec): 00:13:18.505 | 1.00th=[ 297], 5.00th=[ 545], 10.00th=[ 685], 20.00th=[ 881], 00:13:18.505 | 30.00th=[ 1029], 40.00th=[ 1156], 50.00th=[ 1287], 60.00th=[ 1401], 00:13:18.505 | 70.00th=[ 1532], 80.00th=[ 1696], 90.00th=[ 1909], 95.00th=[ 2114], 00:13:18.505 | 99.00th=[ 2802], 99.50th=[ 3097], 99.90th=[ 3851], 99.95th=[ 4080], 00:13:18.505 | 99.99th=[ 4621] 00:13:18.505 bw ( KiB/s): min=132256, max=146840, per=99.74%, avg=139586.67, stdev=4764.58, samples=9 00:13:18.505 iops : min=33064, max=36710, avg=34896.67, stdev=1191.14, samples=9 00:13:18.505 lat (usec) : 250=0.55%, 500=3.51%, 750=8.89%, 1000=14.90% 00:13:18.505 lat (msec) : 2=64.66%, 4=7.42%, 10=0.07% 00:13:18.505 cpu : usr=46.98%, sys=43.60%, ctx=12, majf=0, minf=773 00:13:18.505 IO depths : 1=0.6%, 2=1.4%, 4=3.3%, 8=8.3%, 16=22.3%, 32=62.0%, >=64=2.1% 00:13:18.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.505 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:18.505 issued rwts: total=0,175047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.505 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:18.505 00:13:18.505 Run status group 0 (all jobs): 00:13:18.505 WRITE: bw=137MiB/s (143MB/s), 137MiB/s-137MiB/s (143MB/s-143MB/s), io=684MiB (717MB), run=5003-5003msec 00:13:18.505 ----------------------------------------------------- 00:13:18.505 Suppressions used: 00:13:18.505 
count bytes template 00:13:18.505 1 11 /usr/src/fio/parse.c 00:13:18.505 1 8 libtcmalloc_minimal.so 00:13:18.505 1 904 libcrypto.so 00:13:18.505 ----------------------------------------------------- 00:13:18.505 00:13:18.505 ************************************ 00:13:18.505 END TEST xnvme_fio_plugin 00:13:18.505 ************************************ 00:13:18.505 00:13:18.505 real 0m12.149s 00:13:18.505 user 0m5.936s 00:13:18.505 sys 0m4.955s 00:13:18.505 03:22:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.505 03:22:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:18.767 03:22:06 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:18.767 03:22:06 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:18.767 03:22:06 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:18.767 03:22:06 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:18.767 03:22:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:18.767 03:22:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.767 03:22:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.767 ************************************ 00:13:18.767 START TEST xnvme_rpc 00:13:18.767 ************************************ 00:13:18.767 03:22:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:18.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.767 03:22:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:18.767 03:22:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:18.767 03:22:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:18.767 03:22:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:18.767 03:22:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82681 00:13:18.767 03:22:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82681 00:13:18.767 03:22:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82681 ']' 00:13:18.767 03:22:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.767 03:22:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.767 03:22:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:18.767 03:22:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.767 03:22:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.767 03:22:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.767 [2024-11-21 03:22:06.198635] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:13:18.767 [2024-11-21 03:22:06.198791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82681 ] 00:13:19.029 [2024-11-21 03:22:06.335812] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
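The spdk_tgt starting here serves the second libaio pass, with conserve_cpu=true (cc["true"]=-c in the trace above). The xnvme_rpc test is a create/inspect/delete round-trip over JSON-RPC; a hedged sketch of the same sequence, assuming the rpc_cmd helper in the trace forwards its arguments to scripts/rpk.py-style scripts/rpc.py against /var/tmp/spdk.sock:

    # Sketch only; the test drives this through the rpc_cmd helper, not rpc.py directly.
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # -c => conserve_cpu=true
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev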
00:13:19.029 [2024-11-21 03:22:06.367130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.029 [2024-11-21 03:22:06.395816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.602 xnvme_bdev 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:19.602 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.863 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:19.863 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:19.863 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:19.863 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:19.863 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.863 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.863 03:22:07 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.863 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:19.863 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:19.863 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.863 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.863 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.863 03:22:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82681 00:13:19.863 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82681 ']' 00:13:19.864 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82681 00:13:19.864 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:19.864 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.864 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82681 00:13:19.864 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:19.864 killing process with pid 82681 00:13:19.864 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:19.864 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82681' 00:13:19.864 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82681 00:13:19.864 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82681 00:13:20.125 00:13:20.125 real 0m1.415s 00:13:20.125 user 0m1.462s 00:13:20.125 sys 0m0.418s 00:13:20.125 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.125 03:22:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.125 ************************************ 00:13:20.125 END TEST xnvme_rpc 00:13:20.125 ************************************ 00:13:20.125 03:22:07 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:20.125 03:22:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:20.125 03:22:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.125 03:22:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.125 ************************************ 00:13:20.125 START TEST xnvme_bdevperf 00:13:20.125 ************************************ 00:13:20.125 03:22:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:20.125 03:22:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:20.125 03:22:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:20.125 03:22:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:20.125 03:22:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:20.126 03:22:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:20.126 03:22:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:20.126 03:22:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:20.126 { 00:13:20.126 "subsystems": [ 00:13:20.126 { 00:13:20.126 "subsystem": "bdev", 00:13:20.126 "config": [ 
00:13:20.126 { 00:13:20.126 "params": { 00:13:20.126 "io_mechanism": "libaio", 00:13:20.126 "conserve_cpu": true, 00:13:20.126 "filename": "/dev/nvme0n1", 00:13:20.126 "name": "xnvme_bdev" 00:13:20.126 }, 00:13:20.126 "method": "bdev_xnvme_create" 00:13:20.126 }, 00:13:20.126 { 00:13:20.126 "method": "bdev_wait_for_examine" 00:13:20.126 } 00:13:20.126 ] 00:13:20.126 } 00:13:20.126 ] 00:13:20.126 } 00:13:20.126 [2024-11-21 03:22:07.669317] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:13:20.126 [2024-11-21 03:22:07.669453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82733 ] 00:13:20.388 [2024-11-21 03:22:07.804352] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:20.388 [2024-11-21 03:22:07.831753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.388 [2024-11-21 03:22:07.860847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.650 Running I/O for 5 seconds... 00:13:22.535 33025.00 IOPS, 129.00 MiB/s [2024-11-21T03:22:11.042Z] 32445.00 IOPS, 126.74 MiB/s [2024-11-21T03:22:12.422Z] 32466.00 IOPS, 126.82 MiB/s [2024-11-21T03:22:12.995Z] 31975.50 IOPS, 124.90 MiB/s 00:13:25.430 Latency(us) 00:13:25.430 [2024-11-21T03:22:12.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.430 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:25.430 xnvme_bdev : 5.00 31648.83 123.63 0.00 0.00 2017.51 456.86 8267.62 00:13:25.430 [2024-11-21T03:22:12.995Z] =================================================================================================================== 00:13:25.430 [2024-11-21T03:22:12.995Z] Total : 31648.83 123.63 0.00 0.00 2017.51 456.86 8267.62 00:13:25.692 03:22:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:25.692 03:22:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:25.692 03:22:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:25.692 03:22:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:25.692 03:22:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:25.692 { 00:13:25.692 "subsystems": [ 00:13:25.692 { 00:13:25.692 "subsystem": "bdev", 00:13:25.692 "config": [ 00:13:25.692 { 00:13:25.692 "params": { 00:13:25.692 "io_mechanism": "libaio", 00:13:25.692 "conserve_cpu": true, 00:13:25.692 "filename": "/dev/nvme0n1", 00:13:25.692 "name": "xnvme_bdev" 00:13:25.692 }, 00:13:25.692 "method": "bdev_xnvme_create" 00:13:25.692 }, 00:13:25.692 { 00:13:25.692 "method": "bdev_wait_for_examine" 00:13:25.692 } 00:13:25.692 ] 00:13:25.692 } 00:13:25.692 ] 00:13:25.692 } 00:13:25.692 [2024-11-21 03:22:13.232523] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
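Every bdevperf run in this log follows one invocation pattern: the generated JSON (bdev_xnvme_create plus bdev_wait_for_examine, printed above) is passed on a file descriptor and only the workload changes between runs. A sketch of the randwrite invocation that starts here; gen_conf is the suite's dd/common.sh helper, and the flag glosses are inferred from the trace rather than taken from bdevperf's help text:

    # -q 64: queue depth; -w randwrite: workload (the suite runs randread, then randwrite);
    # -t 5: run time in seconds; -T xnvme_bdev: bdev under test (inferred); -o 4096: IO size in bytes.
    ./build/examples/bdevperf --json <(gen_conf) -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096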
00:13:25.692 [2024-11-21 03:22:13.232848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82803 ] 00:13:25.953 [2024-11-21 03:22:13.368815] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:25.953 [2024-11-21 03:22:13.399491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.953 [2024-11-21 03:22:13.428453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.214 Running I/O for 5 seconds... 00:13:28.102 35195.00 IOPS, 137.48 MiB/s [2024-11-21T03:22:16.611Z] 35912.50 IOPS, 140.28 MiB/s [2024-11-21T03:22:17.999Z] 35091.33 IOPS, 137.08 MiB/s [2024-11-21T03:22:18.572Z] 35093.25 IOPS, 137.08 MiB/s 00:13:31.007 Latency(us) 00:13:31.007 [2024-11-21T03:22:18.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.007 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:31.007 xnvme_bdev : 5.00 34948.31 136.52 0.00 0.00 1826.93 419.05 9376.69 00:13:31.007 [2024-11-21T03:22:18.572Z] =================================================================================================================== 00:13:31.007 [2024-11-21T03:22:18.572Z] Total : 34948.31 136.52 0.00 0.00 1826.93 419.05 9376.69 00:13:31.269 ************************************ 00:13:31.269 END TEST xnvme_bdevperf 00:13:31.269 ************************************ 00:13:31.269 00:13:31.269 real 0m11.153s 00:13:31.269 user 0m3.773s 00:13:31.269 sys 0m6.001s 00:13:31.269 03:22:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.269 03:22:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:31.269 03:22:18 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:31.269 03:22:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:31.269 03:22:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.269 03:22:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.269 ************************************ 00:13:31.269 START TEST xnvme_fio_plugin 00:13:31.269 ************************************ 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # 
local fio_dir=/usr/src/fio 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:31.269 03:22:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:31.531 03:22:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:31.531 03:22:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:31.531 03:22:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:31.531 03:22:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:31.531 03:22:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:31.531 { 00:13:31.531 "subsystems": [ 00:13:31.531 { 00:13:31.531 "subsystem": "bdev", 00:13:31.531 "config": [ 00:13:31.531 { 00:13:31.531 "params": { 00:13:31.531 "io_mechanism": "libaio", 00:13:31.531 "conserve_cpu": true, 00:13:31.531 "filename": "/dev/nvme0n1", 00:13:31.531 "name": "xnvme_bdev" 00:13:31.531 }, 00:13:31.531 "method": "bdev_xnvme_create" 00:13:31.531 }, 00:13:31.531 { 00:13:31.531 "method": "bdev_wait_for_examine" 00:13:31.531 } 00:13:31.531 ] 00:13:31.531 } 00:13:31.531 ] 00:13:31.531 } 00:13:31.531 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:31.531 fio-3.35 00:13:31.531 Starting 1 thread 00:13:38.130 00:13:38.130 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82911: Thu Nov 21 03:22:24 2024 00:13:38.130 read: IOPS=33.7k, BW=132MiB/s (138MB/s)(658MiB/5001msec) 00:13:38.130 slat (usec): min=4, max=2270, avg=16.32, stdev=87.28 00:13:38.130 clat (usec): min=105, max=5125, avg=1436.16, stdev=481.94 00:13:38.130 lat (usec): min=203, max=5207, avg=1452.48, stdev=472.98 00:13:38.130 clat percentiles (usec): 00:13:38.130 | 1.00th=[ 334], 5.00th=[ 693], 10.00th=[ 848], 20.00th=[ 1037], 00:13:38.130 | 30.00th=[ 1188], 40.00th=[ 1319], 50.00th=[ 1434], 60.00th=[ 1549], 00:13:38.130 | 70.00th=[ 1663], 80.00th=[ 1795], 90.00th=[ 1991], 95.00th=[ 2180], 00:13:38.130 | 99.00th=[ 
2835], 99.50th=[ 3195], 99.90th=[ 3818], 99.95th=[ 3982], 00:13:38.130 | 99.99th=[ 4555] 00:13:38.130 bw ( KiB/s): min=128032, max=140112, per=100.00%, avg=135165.33, stdev=3598.00, samples=9 00:13:38.130 iops : min=32008, max=35028, avg=33791.33, stdev=899.50, samples=9 00:13:38.130 lat (usec) : 250=0.38%, 500=1.85%, 750=4.15%, 1000=11.18% 00:13:38.130 lat (msec) : 2=72.62%, 4=9.78%, 10=0.05% 00:13:38.130 cpu : usr=57.72%, sys=35.72%, ctx=10, majf=0, minf=773 00:13:38.130 IO depths : 1=0.9%, 2=1.9%, 4=4.1%, 8=9.2%, 16=22.5%, 32=59.5%, >=64=2.0% 00:13:38.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.130 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:38.130 issued rwts: total=168447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.130 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:38.130 00:13:38.130 Run status group 0 (all jobs): 00:13:38.130 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=658MiB (690MB), run=5001-5001msec 00:13:38.130 ----------------------------------------------------- 00:13:38.130 Suppressions used: 00:13:38.130 count bytes template 00:13:38.130 1 11 /usr/src/fio/parse.c 00:13:38.130 1 8 libtcmalloc_minimal.so 00:13:38.130 1 904 libcrypto.so 00:13:38.130 ----------------------------------------------------- 00:13:38.130 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:38.130 03:22:24 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:38.130 03:22:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:38.130 { 00:13:38.130 "subsystems": [ 00:13:38.130 { 00:13:38.130 "subsystem": "bdev", 00:13:38.130 "config": [ 00:13:38.130 { 00:13:38.130 "params": { 00:13:38.130 "io_mechanism": "libaio", 00:13:38.130 "conserve_cpu": true, 00:13:38.130 "filename": "/dev/nvme0n1", 00:13:38.130 "name": "xnvme_bdev" 00:13:38.130 }, 00:13:38.130 "method": "bdev_xnvme_create" 00:13:38.130 }, 00:13:38.130 { 00:13:38.130 "method": "bdev_wait_for_examine" 00:13:38.130 } 00:13:38.130 ] 00:13:38.130 } 00:13:38.130 ] 00:13:38.130 } 00:13:38.130 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:38.130 fio-3.35 00:13:38.130 Starting 1 thread 00:13:43.425 00:13:43.425 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82998: Thu Nov 21 03:22:30 2024 00:13:43.425 write: IOPS=34.1k, BW=133MiB/s (140MB/s)(667MiB/5003msec); 0 zone resets 00:13:43.425 slat (usec): min=4, max=2101, avg=17.74, stdev=88.12 00:13:43.425 clat (usec): min=106, max=5531, avg=1380.70, stdev=489.47 00:13:43.425 lat (usec): min=209, max=5692, avg=1398.45, stdev=480.39 00:13:43.425 clat percentiles (usec): 00:13:43.425 | 1.00th=[ 310], 5.00th=[ 611], 10.00th=[ 783], 20.00th=[ 988], 00:13:43.425 | 30.00th=[ 1123], 40.00th=[ 1254], 50.00th=[ 1369], 60.00th=[ 1483], 00:13:43.425 | 70.00th=[ 1614], 80.00th=[ 1745], 90.00th=[ 1958], 95.00th=[ 2180], 00:13:43.425 | 99.00th=[ 2769], 99.50th=[ 3064], 99.90th=[ 3720], 99.95th=[ 3982], 00:13:43.425 | 99.99th=[ 4621] 00:13:43.425 bw ( KiB/s): min=127552, max=142192, per=99.32%, avg=135534.00, stdev=4334.37, samples=9 00:13:43.425 iops : min=31888, max=35548, avg=33883.44, stdev=1083.64, samples=9 00:13:43.425 lat (usec) : 250=0.48%, 500=2.53%, 750=5.97%, 1000=11.91% 00:13:43.425 lat (msec) : 2=70.57%, 4=8.49%, 10=0.05% 00:13:43.425 cpu : usr=51.78%, sys=40.78%, ctx=13, majf=0, minf=773 00:13:43.425 IO depths : 1=0.7%, 2=1.6%, 4=3.6%, 8=8.8%, 16=22.6%, 32=60.6%, >=64=2.1% 00:13:43.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.425 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:43.425 issued rwts: total=0,170682,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.425 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:43.425 00:13:43.425 Run status group 0 (all jobs): 00:13:43.425 WRITE: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=667MiB (699MB), run=5003-5003msec 00:13:43.425 ----------------------------------------------------- 00:13:43.425 Suppressions used: 00:13:43.425 count bytes template 00:13:43.425 1 11 /usr/src/fio/parse.c 00:13:43.425 1 8 libtcmalloc_minimal.so 00:13:43.425 1 904 libcrypto.so 00:13:43.425 ----------------------------------------------------- 00:13:43.425 
00:13:43.425 ************************************ 00:13:43.425 END TEST xnvme_fio_plugin 00:13:43.425 ************************************ 00:13:43.425 00:13:43.425 real 0m12.114s 00:13:43.425 user 0m6.620s 00:13:43.425 sys 0m4.413s 00:13:43.425 03:22:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.425 03:22:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:43.723 03:22:30 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:43.723 03:22:30 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:43.723 03:22:30 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:43.723 03:22:30 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:43.723 03:22:30 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:43.723 03:22:30 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:43.723 03:22:30 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:43.723 03:22:30 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:43.723 03:22:30 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:43.723 03:22:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:43.723 03:22:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.723 03:22:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.723 ************************************ 00:13:43.723 START TEST xnvme_rpc 00:13:43.723 ************************************ 00:13:43.723 03:22:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:43.723 03:22:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:43.723 03:22:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:43.723 03:22:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:43.723 03:22:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:43.723 03:22:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83073 00:13:43.723 03:22:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83073 00:13:43.723 03:22:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83073 ']' 00:13:43.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.723 03:22:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.723 03:22:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.723 03:22:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.723 03:22:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.723 03:22:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.723 03:22:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:43.723 [2024-11-21 03:22:31.092257] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
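With libaio finished, the outer sweep advances to io_uring and repeats the same three tests per conserve_cpu setting. The overall structure, reconstructed from the xnvme.sh trace lines above (@75, @82, @86-@88):

    # Reconstructed from the xnvme.sh trace: three tests per (io_mechanism, conserve_cpu) pair.
    for io in "${xnvme_io[@]}"; do                     # libaio, io_uring, io_uring_cmd
        for cc in "${xnvme_conserve_cpu[@]}"; do       # false, true
            run_test xnvme_rpc xnvme_rpc               # JSON-RPC create/inspect/delete round-trip
            run_test xnvme_bdevperf xnvme_bdevperf     # bdevperf randread + randwrite, 5 s each
            run_test xnvme_fio_plugin xnvme_fio_plugin # fio spdk_bdev engine, randread + randwrite
        done
    done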
00:13:43.723 [2024-11-21 03:22:31.092419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83073 ] 00:13:43.723 [2024-11-21 03:22:31.229958] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:43.723 [2024-11-21 03:22:31.260289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.984 [2024-11-21 03:22:31.289872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.557 xnvme_bdev 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:44.557 03:22:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # 
rpc_xnvme conserve_cpu 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83073 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83073 ']' 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83073 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.557 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83073 00:13:44.818 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.818 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:44.818 killing process with pid 83073 00:13:44.818 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83073' 00:13:44.818 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83073 00:13:44.818 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83073 00:13:45.081 00:13:45.081 real 0m1.407s 00:13:45.081 user 0m1.506s 00:13:45.081 sys 0m0.385s 00:13:45.081 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.081 03:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.081 ************************************ 00:13:45.081 END TEST xnvme_rpc 00:13:45.081 ************************************ 00:13:45.081 03:22:32 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:45.081 03:22:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:45.081 03:22:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.081 03:22:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:45.081 ************************************ 00:13:45.081 START TEST xnvme_bdevperf 00:13:45.081 ************************************ 00:13:45.081 03:22:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:45.081 03:22:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:45.081 03:22:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:45.081 03:22:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:45.081 03:22:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:45.081 03:22:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:45.081 03:22:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:45.081 03:22:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:45.081 { 00:13:45.081 "subsystems": [ 00:13:45.081 { 00:13:45.081 "subsystem": "bdev", 00:13:45.081 "config": [ 00:13:45.081 { 00:13:45.081 "params": { 00:13:45.081 "io_mechanism": "io_uring", 00:13:45.081 "conserve_cpu": false, 00:13:45.081 "filename": "/dev/nvme0n1", 00:13:45.081 "name": "xnvme_bdev" 00:13:45.081 }, 00:13:45.081 "method": "bdev_xnvme_create" 00:13:45.081 }, 00:13:45.081 { 00:13:45.081 "method": "bdev_wait_for_examine" 00:13:45.081 } 00:13:45.081 ] 00:13:45.081 } 00:13:45.081 ] 00:13:45.081 } 00:13:45.081 [2024-11-21 03:22:32.534950] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:13:45.081 [2024-11-21 03:22:32.535059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83131 ] 00:13:45.342 [2024-11-21 03:22:32.666830] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:45.342 [2024-11-21 03:22:32.698782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.342 [2024-11-21 03:22:32.719911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.342 Running I/O for 5 seconds... 00:13:47.677 32039.00 IOPS, 125.15 MiB/s [2024-11-21T03:22:35.815Z] 31791.00 IOPS, 124.18 MiB/s [2024-11-21T03:22:37.202Z] 31468.67 IOPS, 122.92 MiB/s [2024-11-21T03:22:38.148Z] 31495.25 IOPS, 123.03 MiB/s 00:13:50.583 Latency(us) 00:13:50.583 [2024-11-21T03:22:38.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.583 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:50.583 xnvme_bdev : 5.00 31466.71 122.92 0.00 0.00 2030.23 1184.69 8217.21 00:13:50.583 [2024-11-21T03:22:38.148Z] =================================================================================================================== 00:13:50.583 [2024-11-21T03:22:38.148Z] Total : 31466.71 122.92 0.00 0.00 2030.23 1184.69 8217.21 00:13:50.583 03:22:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:50.583 03:22:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:50.583 03:22:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:50.583 03:22:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:50.583 03:22:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:50.583 { 00:13:50.583 "subsystems": [ 00:13:50.583 { 00:13:50.583 "subsystem": "bdev", 00:13:50.583 "config": [ 00:13:50.583 { 00:13:50.583 "params": { 00:13:50.583 "io_mechanism": "io_uring", 00:13:50.583 "conserve_cpu": false, 00:13:50.583 "filename": "/dev/nvme0n1", 00:13:50.583 "name": "xnvme_bdev" 00:13:50.583 }, 00:13:50.583 "method": "bdev_xnvme_create" 00:13:50.583 }, 00:13:50.583 { 00:13:50.583 "method": 
"bdev_wait_for_examine" 00:13:50.583 } 00:13:50.583 ] 00:13:50.583 } 00:13:50.583 ] 00:13:50.583 } 00:13:50.583 [2024-11-21 03:22:38.057215] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:13:50.583 [2024-11-21 03:22:38.057361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83195 ] 00:13:50.843 [2024-11-21 03:22:38.193728] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:50.843 [2024-11-21 03:22:38.223943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.843 [2024-11-21 03:22:38.253667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.843 Running I/O for 5 seconds... 00:13:52.808 36571.00 IOPS, 142.86 MiB/s [2024-11-21T03:22:41.759Z] 34878.50 IOPS, 136.24 MiB/s [2024-11-21T03:22:42.699Z] 34165.67 IOPS, 133.46 MiB/s [2024-11-21T03:22:43.642Z] 34239.75 IOPS, 133.75 MiB/s [2024-11-21T03:22:43.642Z] 33845.60 IOPS, 132.21 MiB/s 00:13:56.077 Latency(us) 00:13:56.077 [2024-11-21T03:22:43.642Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.077 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:56.077 xnvme_bdev : 5.00 33838.66 132.18 0.00 0.00 1887.49 365.49 8519.68 00:13:56.077 [2024-11-21T03:22:43.642Z] =================================================================================================================== 00:13:56.077 [2024-11-21T03:22:43.642Z] Total : 33838.66 132.18 0.00 0.00 1887.49 365.49 8519.68 00:13:56.077 00:13:56.077 real 0m11.089s 00:13:56.077 user 0m4.579s 00:13:56.077 sys 0m6.259s 00:13:56.077 ************************************ 00:13:56.077 END TEST xnvme_bdevperf 00:13:56.077 ************************************ 00:13:56.077 03:22:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.077 03:22:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:56.077 03:22:43 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:56.077 03:22:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:56.077 03:22:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.077 03:22:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:56.077 ************************************ 00:13:56.077 START TEST xnvme_fio_plugin 00:13:56.077 ************************************ 00:13:56.077 03:22:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:56.077 03:22:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:56.077 03:22:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:56.077 03:22:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:56.077 03:22:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:56.077 03:22:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:56.077 03:22:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:56.077 03:22:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:56.078 03:22:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:56.078 03:22:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:56.078 03:22:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:56.078 03:22:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:56.078 03:22:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:56.078 03:22:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:56.078 03:22:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:56.078 03:22:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:56.078 03:22:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:56.078 03:22:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:56.078 03:22:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:56.339 03:22:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:56.339 03:22:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:56.339 03:22:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:56.339 03:22:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:56.339 03:22:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:56.339 { 00:13:56.339 "subsystems": [ 00:13:56.339 { 00:13:56.339 "subsystem": "bdev", 00:13:56.339 "config": [ 00:13:56.339 { 00:13:56.339 "params": { 00:13:56.339 "io_mechanism": "io_uring", 00:13:56.339 "conserve_cpu": false, 00:13:56.339 "filename": "/dev/nvme0n1", 00:13:56.339 "name": "xnvme_bdev" 00:13:56.339 }, 00:13:56.339 "method": "bdev_xnvme_create" 00:13:56.339 }, 00:13:56.339 { 00:13:56.339 "method": "bdev_wait_for_examine" 00:13:56.339 } 00:13:56.339 ] 00:13:56.339 } 00:13:56.339 ] 00:13:56.339 } 00:13:56.339 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:56.339 fio-3.35 00:13:56.339 Starting 1 thread 00:14:02.933 00:14:02.933 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83303: Thu Nov 21 03:22:49 2024 00:14:02.933 read: IOPS=33.0k, BW=129MiB/s (135MB/s)(645MiB/5001msec) 00:14:02.933 slat (nsec): min=2714, max=66110, avg=3524.71, stdev=1991.69 00:14:02.933 clat (usec): min=967, max=3925, avg=1793.60, stdev=304.93 00:14:02.933 lat (usec): min=971, max=3959, 
avg=1797.12, stdev=305.31 00:14:02.933 clat percentiles (usec): 00:14:02.933 | 1.00th=[ 1205], 5.00th=[ 1352], 10.00th=[ 1434], 20.00th=[ 1532], 00:14:02.933 | 30.00th=[ 1614], 40.00th=[ 1696], 50.00th=[ 1762], 60.00th=[ 1844], 00:14:02.933 | 70.00th=[ 1926], 80.00th=[ 2024], 90.00th=[ 2180], 95.00th=[ 2343], 00:14:02.933 | 99.00th=[ 2671], 99.50th=[ 2802], 99.90th=[ 3097], 99.95th=[ 3228], 00:14:02.933 | 99.99th=[ 3752] 00:14:02.933 bw ( KiB/s): min=125952, max=139264, per=100.00%, avg=132209.78, stdev=4929.43, samples=9 00:14:02.933 iops : min=31488, max=34816, avg=33052.44, stdev=1232.36, samples=9 00:14:02.933 lat (usec) : 1000=0.01% 00:14:02.933 lat (msec) : 2=77.37%, 4=22.62% 00:14:02.933 cpu : usr=32.66%, sys=65.96%, ctx=15, majf=0, minf=771 00:14:02.933 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:02.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.933 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:02.933 issued rwts: total=165120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.933 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:02.933 00:14:02.933 Run status group 0 (all jobs): 00:14:02.933 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=645MiB (676MB), run=5001-5001msec 00:14:02.933 ----------------------------------------------------- 00:14:02.933 Suppressions used: 00:14:02.933 count bytes template 00:14:02.933 1 11 /usr/src/fio/parse.c 00:14:02.933 1 8 libtcmalloc_minimal.so 00:14:02.933 1 904 libcrypto.so 00:14:02.933 ----------------------------------------------------- 00:14:02.933 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:02.933 03:22:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:02.933 { 00:14:02.933 "subsystems": [ 00:14:02.933 { 00:14:02.933 "subsystem": "bdev", 00:14:02.933 "config": [ 00:14:02.933 { 00:14:02.933 "params": { 00:14:02.933 "io_mechanism": "io_uring", 00:14:02.933 "conserve_cpu": false, 00:14:02.933 "filename": "/dev/nvme0n1", 00:14:02.933 "name": "xnvme_bdev" 00:14:02.933 }, 00:14:02.933 "method": "bdev_xnvme_create" 00:14:02.933 }, 00:14:02.933 { 00:14:02.933 "method": "bdev_wait_for_examine" 00:14:02.933 } 00:14:02.933 ] 00:14:02.933 } 00:14:02.933 ] 00:14:02.933 } 00:14:02.933 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:02.933 fio-3.35 00:14:02.933 Starting 1 thread 00:14:08.234 00:14:08.234 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83384: Thu Nov 21 03:22:55 2024 00:14:08.234 write: IOPS=32.4k, BW=126MiB/s (133MB/s)(633MiB/5002msec); 0 zone resets 00:14:08.234 slat (usec): min=2, max=133, avg= 3.60, stdev= 2.09 00:14:08.234 clat (usec): min=354, max=6008, avg=1828.53, stdev=295.83 00:14:08.234 lat (usec): min=357, max=6011, avg=1832.14, stdev=296.16 00:14:08.234 clat percentiles (usec): 00:14:08.234 | 1.00th=[ 1287], 5.00th=[ 1418], 10.00th=[ 1483], 20.00th=[ 1582], 00:14:08.234 | 30.00th=[ 1647], 40.00th=[ 1729], 50.00th=[ 1795], 60.00th=[ 1860], 00:14:08.234 | 70.00th=[ 1958], 80.00th=[ 2057], 90.00th=[ 2212], 95.00th=[ 2343], 00:14:08.234 | 99.00th=[ 2638], 99.50th=[ 2737], 99.90th=[ 3195], 99.95th=[ 3458], 00:14:08.234 | 99.99th=[ 4621] 00:14:08.234 bw ( KiB/s): min=126424, max=131848, per=99.91%, avg=129375.33, stdev=1829.85, samples=9 00:14:08.234 iops : min=31606, max=32962, avg=32343.78, stdev=457.41, samples=9 00:14:08.234 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:14:08.234 lat (msec) : 2=74.83%, 4=25.11%, 10=0.03% 00:14:08.234 cpu : usr=32.19%, sys=66.47%, ctx=13, majf=0, minf=771 00:14:08.234 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.2%, >=64=1.6% 00:14:08.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.234 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:08.234 issued rwts: total=0,161930,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:08.234 00:14:08.234 Run status group 0 (all jobs): 00:14:08.234 WRITE: bw=126MiB/s (133MB/s), 126MiB/s-126MiB/s (133MB/s-133MB/s), io=633MiB (663MB), run=5002-5002msec 
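As a quick consistency check on the summaries in this section, fio's bandwidth and IOPS figures are tied together by the 4 KiB block size every run here uses; plugging in the rounded values from the write pass just above:

# bw = iops * bs for bs=4096:
echo $(( 32400 * 4096 ))             # 132710400 B/s
echo $(( 32400 * 4096 / 1048576 ))   # 126 MiB/s, matching the WRITE summary line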
00:14:08.234 ----------------------------------------------------- 00:14:08.234 Suppressions used: 00:14:08.234 count bytes template 00:14:08.234 1 11 /usr/src/fio/parse.c 00:14:08.234 1 8 libtcmalloc_minimal.so 00:14:08.234 1 904 libcrypto.so 00:14:08.234 ----------------------------------------------------- 00:14:08.234 00:14:08.234 00:14:08.234 real 0m12.068s 00:14:08.234 user 0m4.433s 00:14:08.234 sys 0m7.183s 00:14:08.234 03:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.234 ************************************ 00:14:08.234 03:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:08.234 END TEST xnvme_fio_plugin 00:14:08.234 ************************************ 00:14:08.234 03:22:55 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:08.234 03:22:55 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:08.234 03:22:55 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:08.234 03:22:55 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:08.234 03:22:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:08.234 03:22:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.234 03:22:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:08.234 ************************************ 00:14:08.234 START TEST xnvme_rpc 00:14:08.234 ************************************ 00:14:08.234 03:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:08.234 03:22:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:08.234 03:22:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:08.234 03:22:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:08.234 03:22:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:08.234 03:22:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83463 00:14:08.234 03:22:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83463 00:14:08.234 03:22:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:08.234 03:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83463 ']' 00:14:08.235 03:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.235 03:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.235 03:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.235 03:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.235 03:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.497 [2024-11-21 03:22:55.853710] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:14:08.497 [2024-11-21 03:22:55.853872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83463 ] 00:14:08.497 [2024-11-21 03:22:55.991317] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
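This second xnvme_rpc pass is identical to the earlier one except that conserve_cpu is enabled; the only change is the -c flag on create, which the test then verifies through the same jq filter. A sketch, again assuming scripts/rpc.py behind rpc_cmd:

# Sketch: create with conserve_cpu=true and confirm the flag round-trips.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
$RPC framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'  # -> true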
There is no support for it in SPDK. Enabled only for validation. 00:14:08.497 [2024-11-21 03:22:56.022594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.497 [2024-11-21 03:22:56.054314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.439 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.439 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:09.439 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:14:09.439 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.439 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.440 xnvme_bdev 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == 
"bdev_xnvme_create").params.conserve_cpu' 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83463 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83463 ']' 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83463 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83463 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.440 killing process with pid 83463 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83463' 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83463 00:14:09.440 03:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83463 00:14:09.702 00:14:09.702 real 0m1.411s 00:14:09.702 user 0m1.484s 00:14:09.702 sys 0m0.419s 00:14:09.702 ************************************ 00:14:09.702 END TEST xnvme_rpc 00:14:09.702 ************************************ 00:14:09.702 03:22:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.702 03:22:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.702 03:22:57 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:09.702 03:22:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:09.702 03:22:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.702 03:22:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:09.702 ************************************ 00:14:09.702 START TEST xnvme_bdevperf 00:14:09.702 ************************************ 00:14:09.702 03:22:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:09.702 03:22:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:09.702 03:22:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:09.702 03:22:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:09.702 03:22:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:09.702 03:22:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:09.702 03:22:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:09.702 03:22:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:09.964 { 00:14:09.964 
"subsystems": [ 00:14:09.964 { 00:14:09.964 "subsystem": "bdev", 00:14:09.964 "config": [ 00:14:09.964 { 00:14:09.964 "params": { 00:14:09.964 "io_mechanism": "io_uring", 00:14:09.964 "conserve_cpu": true, 00:14:09.964 "filename": "/dev/nvme0n1", 00:14:09.964 "name": "xnvme_bdev" 00:14:09.964 }, 00:14:09.964 "method": "bdev_xnvme_create" 00:14:09.964 }, 00:14:09.964 { 00:14:09.964 "method": "bdev_wait_for_examine" 00:14:09.964 } 00:14:09.964 ] 00:14:09.964 } 00:14:09.964 ] 00:14:09.964 } 00:14:09.964 [2024-11-21 03:22:57.310216] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:14:09.964 [2024-11-21 03:22:57.310379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83522 ] 00:14:09.964 [2024-11-21 03:22:57.446693] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:09.964 [2024-11-21 03:22:57.476600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.964 [2024-11-21 03:22:57.508415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.226 Running I/O for 5 seconds... 00:14:12.115 32947.00 IOPS, 128.70 MiB/s [2024-11-21T03:23:00.626Z] 32538.50 IOPS, 127.10 MiB/s [2024-11-21T03:23:02.016Z] 32833.33 IOPS, 128.26 MiB/s [2024-11-21T03:23:02.960Z] 33261.75 IOPS, 129.93 MiB/s [2024-11-21T03:23:02.960Z] 33572.20 IOPS, 131.14 MiB/s 00:14:15.395 Latency(us) 00:14:15.395 [2024-11-21T03:23:02.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.395 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:15.395 xnvme_bdev : 5.01 33550.76 131.06 0.00 0.00 1903.50 875.91 9880.81 00:14:15.395 [2024-11-21T03:23:02.960Z] =================================================================================================================== 00:14:15.395 [2024-11-21T03:23:02.960Z] Total : 33550.76 131.06 0.00 0.00 1903.50 875.91 9880.81 00:14:15.395 03:23:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:15.395 03:23:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:15.395 03:23:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:15.395 03:23:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:15.395 03:23:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:15.395 { 00:14:15.395 "subsystems": [ 00:14:15.395 { 00:14:15.395 "subsystem": "bdev", 00:14:15.395 "config": [ 00:14:15.395 { 00:14:15.395 "params": { 00:14:15.395 "io_mechanism": "io_uring", 00:14:15.395 "conserve_cpu": true, 00:14:15.395 "filename": "/dev/nvme0n1", 00:14:15.395 "name": "xnvme_bdev" 00:14:15.395 }, 00:14:15.395 "method": "bdev_xnvme_create" 00:14:15.395 }, 00:14:15.395 { 00:14:15.395 "method": "bdev_wait_for_examine" 00:14:15.395 } 00:14:15.395 ] 00:14:15.395 } 00:14:15.395 ] 00:14:15.395 } 00:14:15.395 [2024-11-21 03:23:02.881823] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:14:15.395 [2024-11-21 03:23:02.881973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83586 ] 00:14:15.656 [2024-11-21 03:23:03.017522] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:15.656 [2024-11-21 03:23:03.047677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.656 [2024-11-21 03:23:03.076496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.656 Running I/O for 5 seconds... 00:14:17.984 35827.00 IOPS, 139.95 MiB/s [2024-11-21T03:23:06.494Z] 35764.50 IOPS, 139.71 MiB/s [2024-11-21T03:23:07.438Z] 34949.33 IOPS, 136.52 MiB/s [2024-11-21T03:23:08.383Z] 34446.25 IOPS, 134.56 MiB/s [2024-11-21T03:23:08.383Z] 34298.40 IOPS, 133.98 MiB/s 00:14:20.818 Latency(us) 00:14:20.818 [2024-11-21T03:23:08.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.818 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:20.818 xnvme_bdev : 5.00 34277.25 133.90 0.00 0.00 1863.01 1020.85 5898.24 00:14:20.818 [2024-11-21T03:23:08.383Z] =================================================================================================================== 00:14:20.818 [2024-11-21T03:23:08.383Z] Total : 34277.25 133.90 0.00 0.00 1863.01 1020.85 5898.24 00:14:20.818 00:14:20.818 real 0m11.123s 00:14:20.818 user 0m6.498s 00:14:20.818 sys 0m4.082s 00:14:20.818 03:23:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:20.818 03:23:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:20.818 ************************************ 00:14:20.818 END TEST xnvme_bdevperf 00:14:20.818 ************************************ 00:14:21.080 03:23:08 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:21.080 03:23:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:21.080 03:23:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.080 03:23:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:21.080 ************************************ 00:14:21.080 START TEST xnvme_fio_plugin 00:14:21.080 ************************************ 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:21.080 03:23:08 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:21.080 03:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:21.080 { 00:14:21.080 "subsystems": [ 00:14:21.080 { 00:14:21.080 "subsystem": "bdev", 00:14:21.080 "config": [ 00:14:21.080 { 00:14:21.080 "params": { 00:14:21.080 "io_mechanism": "io_uring", 00:14:21.080 "conserve_cpu": true, 00:14:21.080 "filename": "/dev/nvme0n1", 00:14:21.080 "name": "xnvme_bdev" 00:14:21.080 }, 00:14:21.080 "method": "bdev_xnvme_create" 00:14:21.080 }, 00:14:21.080 { 00:14:21.080 "method": "bdev_wait_for_examine" 00:14:21.080 } 00:14:21.080 ] 00:14:21.080 } 00:14:21.080 ] 00:14:21.080 } 00:14:21.080 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:21.080 fio-3.35 00:14:21.080 Starting 1 thread 00:14:27.700 00:14:27.701 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83693: Thu Nov 21 03:23:14 2024 00:14:27.701 read: IOPS=32.5k, BW=127MiB/s (133MB/s)(635MiB/5002msec) 00:14:27.701 slat (nsec): min=2730, max=89231, avg=3478.32, stdev=1786.77 00:14:27.701 clat (usec): min=954, max=3758, avg=1825.95, stdev=284.48 00:14:27.701 lat (usec): min=957, max=3792, avg=1829.43, stdev=284.80 00:14:27.701 clat percentiles (usec): 00:14:27.701 | 1.00th=[ 1270], 5.00th=[ 1418], 10.00th=[ 1483], 20.00th=[ 1582], 00:14:27.701 | 30.00th=[ 1663], 40.00th=[ 1729], 50.00th=[ 1795], 60.00th=[ 1876], 00:14:27.701 | 70.00th=[ 1958], 
80.00th=[ 2057], 90.00th=[ 2180], 95.00th=[ 2311], 00:14:27.701 | 99.00th=[ 2606], 99.50th=[ 2769], 99.90th=[ 3195], 99.95th=[ 3326], 00:14:27.701 | 99.99th=[ 3589] 00:14:27.701 bw ( KiB/s): min=126976, max=142336, per=100.00%, avg=130417.78, stdev=4592.17, samples=9 00:14:27.701 iops : min=31744, max=35584, avg=32604.44, stdev=1148.04, samples=9 00:14:27.701 lat (usec) : 1000=0.01% 00:14:27.701 lat (msec) : 2=75.34%, 4=24.65% 00:14:27.701 cpu : usr=56.79%, sys=39.33%, ctx=12, majf=0, minf=771 00:14:27.701 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:27.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.701 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:27.701 issued rwts: total=162656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.701 00:14:27.701 Run status group 0 (all jobs): 00:14:27.701 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=635MiB (666MB), run=5002-5002msec 00:14:27.701 ----------------------------------------------------- 00:14:27.701 Suppressions used: 00:14:27.701 count bytes template 00:14:27.701 1 11 /usr/src/fio/parse.c 00:14:27.701 1 8 libtcmalloc_minimal.so 00:14:27.701 1 904 libcrypto.so 00:14:27.701 ----------------------------------------------------- 00:14:27.701 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk 
'{print $3}' 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:27.701 03:23:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:27.701 { 00:14:27.701 "subsystems": [ 00:14:27.701 { 00:14:27.701 "subsystem": "bdev", 00:14:27.701 "config": [ 00:14:27.701 { 00:14:27.701 "params": { 00:14:27.701 "io_mechanism": "io_uring", 00:14:27.701 "conserve_cpu": true, 00:14:27.701 "filename": "/dev/nvme0n1", 00:14:27.701 "name": "xnvme_bdev" 00:14:27.701 }, 00:14:27.701 "method": "bdev_xnvme_create" 00:14:27.701 }, 00:14:27.701 { 00:14:27.701 "method": "bdev_wait_for_examine" 00:14:27.701 } 00:14:27.701 ] 00:14:27.701 } 00:14:27.701 ] 00:14:27.701 } 00:14:27.701 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:27.701 fio-3.35 00:14:27.701 Starting 1 thread 00:14:32.978 00:14:32.978 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83775: Thu Nov 21 03:23:20 2024 00:14:32.978 write: IOPS=36.8k, BW=144MiB/s (151MB/s)(719MiB/5001msec); 0 zone resets 00:14:32.978 slat (nsec): min=2780, max=64114, avg=3489.01, stdev=1743.28 00:14:32.978 clat (usec): min=710, max=6394, avg=1599.75, stdev=305.82 00:14:32.978 lat (usec): min=714, max=6397, avg=1603.24, stdev=306.21 00:14:32.978 clat percentiles (usec): 00:14:32.978 | 1.00th=[ 1074], 5.00th=[ 1172], 10.00th=[ 1237], 20.00th=[ 1336], 00:14:32.978 | 30.00th=[ 1418], 40.00th=[ 1500], 50.00th=[ 1565], 60.00th=[ 1647], 00:14:32.978 | 70.00th=[ 1729], 80.00th=[ 1844], 90.00th=[ 1991], 95.00th=[ 2114], 00:14:32.978 | 99.00th=[ 2409], 99.50th=[ 2540], 99.90th=[ 2933], 99.95th=[ 3458], 00:14:32.978 | 99.99th=[ 5604] 00:14:32.978 bw ( KiB/s): min=130016, max=161656, per=99.07%, avg=145859.44, stdev=13161.91, samples=9 00:14:32.978 iops : min=32504, max=40414, avg=36464.78, stdev=3290.41, samples=9 00:14:32.978 lat (usec) : 750=0.01%, 1000=0.16% 00:14:32.978 lat (msec) : 2=90.25%, 4=9.55%, 10=0.05% 00:14:32.978 cpu : usr=60.34%, sys=36.04%, ctx=16, majf=0, minf=771 00:14:32.978 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:32.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.978 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:32.978 issued rwts: total=0,184064,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:32.978 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:32.978 00:14:32.978 Run status group 0 (all jobs): 00:14:32.978 WRITE: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=719MiB (754MB), run=5001-5001msec 00:14:32.978 ----------------------------------------------------- 00:14:32.978 Suppressions used: 00:14:32.978 count bytes template 00:14:32.978 1 11 /usr/src/fio/parse.c 00:14:32.978 1 8 libtcmalloc_minimal.so 00:14:32.978 1 904 libcrypto.so 00:14:32.978 
----------------------------------------------------- 00:14:32.978 00:14:32.978 00:14:32.978 real 0m11.988s 00:14:32.978 user 0m6.946s 00:14:32.978 sys 0m4.347s 00:14:32.978 03:23:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.978 03:23:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:32.978 ************************************ 00:14:32.978 END TEST xnvme_fio_plugin 00:14:32.978 ************************************ 00:14:32.978 03:23:20 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:32.978 03:23:20 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:32.978 03:23:20 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:32.978 03:23:20 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:32.978 03:23:20 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:32.978 03:23:20 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:32.978 03:23:20 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:32.978 03:23:20 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:32.978 03:23:20 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:32.978 03:23:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:32.978 03:23:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.978 03:23:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.978 ************************************ 00:14:32.978 START TEST xnvme_rpc 00:14:32.978 ************************************ 00:14:32.978 03:23:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:32.978 03:23:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:32.978 03:23:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:32.978 03:23:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:32.978 03:23:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:32.978 03:23:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83850 00:14:32.978 03:23:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83850 00:14:32.978 03:23:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83850 ']' 00:14:32.978 03:23:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.978 03:23:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.978 03:23:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.978 03:23:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:32.978 03:23:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.978 03:23:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.237 [2024-11-21 03:23:20.552939] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
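From here the loop switches the io_mechanism to io_uring_cmd, which targets the NVMe character device /dev/ng0n1 (io_uring passthrough commands) rather than the block device; per the trace that follows, the create call differs only in filename and mechanism. As a sketch:

# Sketch: xnvme bdev over io_uring_cmd on the NVMe char device.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
$RPC framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'  # -> /dev/ng0n1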
00:14:33.237 [2024-11-21 03:23:20.553058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83850 ] 00:14:33.237 [2024-11-21 03:23:20.685657] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:33.237 [2024-11-21 03:23:20.712722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.237 [2024-11-21 03:23:20.731702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.168 xnvme_bdev 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- 
# rpc_xnvme conserve_cpu 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83850 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83850 ']' 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83850 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83850 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.168 killing process with pid 83850 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83850' 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83850 00:14:34.168 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83850 00:14:34.426 00:14:34.426 real 0m1.315s 00:14:34.426 user 0m1.451s 00:14:34.426 sys 0m0.308s 00:14:34.426 ************************************ 00:14:34.426 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.426 03:23:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.426 END TEST xnvme_rpc 00:14:34.426 ************************************ 00:14:34.426 03:23:21 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:34.426 03:23:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:34.426 03:23:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.426 03:23:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:34.426 ************************************ 00:14:34.426 START TEST xnvme_bdevperf 00:14:34.426 ************************************ 00:14:34.426 03:23:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:34.426 03:23:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:34.426 03:23:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:34.426 03:23:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:34.426 03:23:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- 
# /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:34.426 03:23:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:34.426 03:23:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:34.426 03:23:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:34.426 { 00:14:34.426 "subsystems": [ 00:14:34.427 { 00:14:34.427 "subsystem": "bdev", 00:14:34.427 "config": [ 00:14:34.427 { 00:14:34.427 "params": { 00:14:34.427 "io_mechanism": "io_uring_cmd", 00:14:34.427 "conserve_cpu": false, 00:14:34.427 "filename": "/dev/ng0n1", 00:14:34.427 "name": "xnvme_bdev" 00:14:34.427 }, 00:14:34.427 "method": "bdev_xnvme_create" 00:14:34.427 }, 00:14:34.427 { 00:14:34.427 "method": "bdev_wait_for_examine" 00:14:34.427 } 00:14:34.427 ] 00:14:34.427 } 00:14:34.427 ] 00:14:34.427 } 00:14:34.427 [2024-11-21 03:23:21.919811] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:14:34.427 [2024-11-21 03:23:21.919928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83908 ] 00:14:34.684 [2024-11-21 03:23:22.051265] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:34.684 [2024-11-21 03:23:22.081043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.684 [2024-11-21 03:23:22.100476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.685 Running I/O for 5 seconds... 
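
The bdevperf invocation above is self-contained: the tool reads its bdev table as JSON from /dev/fd/62, which gen_conf fills with the config block just printed. A minimal sketch of reproducing this randread step by hand, outside the harness (paths are the ones resolved in the trace; /tmp/xnvme.json is a scratch name introduced here, not something the suite creates):

cat > /tmp/xnvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": false,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# -q: queue depth, -w: workload, -t: seconds, -T: bdev under test, -o: I/O size in bytes
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/xnvme.json \
    -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
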
00:14:36.622 37974.00 IOPS, 148.34 MiB/s [2024-11-21T03:23:25.568Z] 37823.00 IOPS, 147.75 MiB/s [2024-11-21T03:23:26.511Z] 37500.33 IOPS, 146.49 MiB/s [2024-11-21T03:23:27.191Z] 36697.75 IOPS, 143.35 MiB/s [2024-11-21T03:23:27.191Z] 36534.80 IOPS, 142.71 MiB/s 00:14:39.626 Latency(us) 00:14:39.626 [2024-11-21T03:23:27.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.626 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:39.626 xnvme_bdev : 5.00 36527.24 142.68 0.00 0.00 1748.45 1077.56 7360.20 00:14:39.626 [2024-11-21T03:23:27.191Z] =================================================================================================================== 00:14:39.626 [2024-11-21T03:23:27.191Z] Total : 36527.24 142.68 0.00 0.00 1748.45 1077.56 7360.20 00:14:39.889 03:23:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:39.889 03:23:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:39.889 03:23:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:39.889 03:23:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:39.889 03:23:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:39.889 { 00:14:39.889 "subsystems": [ 00:14:39.889 { 00:14:39.889 "subsystem": "bdev", 00:14:39.889 "config": [ 00:14:39.889 { 00:14:39.889 "params": { 00:14:39.889 "io_mechanism": "io_uring_cmd", 00:14:39.889 "conserve_cpu": false, 00:14:39.889 "filename": "/dev/ng0n1", 00:14:39.889 "name": "xnvme_bdev" 00:14:39.889 }, 00:14:39.889 "method": "bdev_xnvme_create" 00:14:39.889 }, 00:14:39.889 { 00:14:39.889 "method": "bdev_wait_for_examine" 00:14:39.889 } 00:14:39.889 ] 00:14:39.889 } 00:14:39.889 ] 00:14:39.889 } 00:14:39.889 [2024-11-21 03:23:27.438582] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:14:39.889 [2024-11-21 03:23:27.438720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83972 ] 00:14:40.150 [2024-11-21 03:23:27.574545] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:40.150 [2024-11-21 03:23:27.603930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.150 [2024-11-21 03:23:27.632529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.411 Running I/O for 5 seconds... 
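
A quick cross-check of the Latency(us) table above: with -o 4096 every I/O moves 4 KiB, so IOPS and MiB/s are the same measurement in different units. For the randread total, 36527.24 IOPS x 4096 B = 149,615,575 B/s, and 149,615,575 / 1,048,576 = 142.68 MiB/s, which is exactly the MiB/s column. The same conversion applies to every throughput table in this log.
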
00:14:42.296 34374.00 IOPS, 134.27 MiB/s [2024-11-21T03:23:30.803Z] 34918.50 IOPS, 136.40 MiB/s [2024-11-21T03:23:31.746Z] 34466.00 IOPS, 134.63 MiB/s [2024-11-21T03:23:33.134Z] 35483.00 IOPS, 138.61 MiB/s 00:14:45.569 Latency(us) 00:14:45.569 [2024-11-21T03:23:33.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.569 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:45.569 xnvme_bdev : 5.00 36333.17 141.93 0.00 0.00 1757.55 327.68 6755.25 00:14:45.569 [2024-11-21T03:23:33.134Z] =================================================================================================================== 00:14:45.569 [2024-11-21T03:23:33.134Z] Total : 36333.17 141.93 0.00 0.00 1757.55 327.68 6755.25 00:14:45.569 03:23:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:45.569 03:23:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:45.569 03:23:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:45.569 03:23:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:45.569 03:23:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:45.569 { 00:14:45.569 "subsystems": [ 00:14:45.569 { 00:14:45.569 "subsystem": "bdev", 00:14:45.569 "config": [ 00:14:45.569 { 00:14:45.569 "params": { 00:14:45.569 "io_mechanism": "io_uring_cmd", 00:14:45.569 "conserve_cpu": false, 00:14:45.569 "filename": "/dev/ng0n1", 00:14:45.569 "name": "xnvme_bdev" 00:14:45.569 }, 00:14:45.569 "method": "bdev_xnvme_create" 00:14:45.569 }, 00:14:45.569 { 00:14:45.569 "method": "bdev_wait_for_examine" 00:14:45.569 } 00:14:45.569 ] 00:14:45.569 } 00:14:45.569 ] 00:14:45.569 } 00:14:45.569 [2024-11-21 03:23:32.998456] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:14:45.569 [2024-11-21 03:23:32.998585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84035 ] 00:14:45.831 [2024-11-21 03:23:33.135566] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:45.831 [2024-11-21 03:23:33.167011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.831 [2024-11-21 03:23:33.195988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.831 Running I/O for 5 seconds... 
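
Every bdevperf step in this TEST is the same command with only the -w workload swapped; the xnvme/xnvme.sh@15 trace lines mark the loop iterations. Paraphrased as a sketch (names taken from the trace; $rootdir stands for the spdk checkout, and gen_conf is the harness helper that emits the subsystems JSON shown earlier):

for io_pattern in randread randwrite unmap write_zeroes; do
    "$rootdir/build/examples/bdevperf" --json <(gen_conf) \
        -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
done

The process substitution is what shows up as --json /dev/fd/62 in the traces.
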
00:14:48.163 77760.00 IOPS, 303.75 MiB/s [2024-11-21T03:23:36.672Z] 78528.00 IOPS, 306.75 MiB/s [2024-11-21T03:23:37.616Z] 78869.33 IOPS, 308.08 MiB/s [2024-11-21T03:23:38.559Z] 79344.00 IOPS, 309.94 MiB/s 00:14:50.994 Latency(us) 00:14:50.994 [2024-11-21T03:23:38.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.994 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:50.994 xnvme_bdev : 5.00 82823.21 323.53 0.00 0.00 769.35 425.35 4032.98 00:14:50.994 [2024-11-21T03:23:38.559Z] =================================================================================================================== 00:14:50.994 [2024-11-21T03:23:38.559Z] Total : 82823.21 323.53 0.00 0.00 769.35 425.35 4032.98 00:14:50.994 03:23:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:50.994 03:23:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:50.994 03:23:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:50.994 03:23:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:50.994 03:23:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:50.994 { 00:14:50.994 "subsystems": [ 00:14:50.994 { 00:14:50.994 "subsystem": "bdev", 00:14:50.994 "config": [ 00:14:50.994 { 00:14:50.994 "params": { 00:14:50.994 "io_mechanism": "io_uring_cmd", 00:14:50.994 "conserve_cpu": false, 00:14:50.994 "filename": "/dev/ng0n1", 00:14:50.994 "name": "xnvme_bdev" 00:14:50.994 }, 00:14:50.994 "method": "bdev_xnvme_create" 00:14:50.994 }, 00:14:50.994 { 00:14:50.994 "method": "bdev_wait_for_examine" 00:14:50.994 } 00:14:50.994 ] 00:14:50.994 } 00:14:50.994 ] 00:14:50.994 } 00:14:50.994 [2024-11-21 03:23:38.478750] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:14:50.994 [2024-11-21 03:23:38.478851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84108 ] 00:14:51.256 [2024-11-21 03:23:38.611153] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:51.256 [2024-11-21 03:23:38.638697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.256 [2024-11-21 03:23:38.659623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.256 Running I/O for 5 seconds... 
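
Write Zeroes, the next workload, is an optional NVMe command, and controllers that do implement it can complete it far more slowly than ordinary writes, as the next Latency table shows. Support is advertised by ONCS bit 3 in Identify Controller; a quick check with nvme-cli (assuming nvme-cli is installed; the device name is an example):

nvme id-ctrl /dev/nvme0 | grep oncs
# oncs : 0x5f   <- example value: bit 3 (0x8) set = Write Zeroes supported,
#                  bit 2 (0x4) set = Dataset Management, which backs the unmap run above
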
00:14:53.583 867.00 IOPS, 3.39 MiB/s [2024-11-21T03:23:42.092Z] 514.00 IOPS, 2.01 MiB/s [2024-11-21T03:23:43.035Z] 558.67 IOPS, 2.18 MiB/s [2024-11-21T03:23:43.976Z] 455.00 IOPS, 1.78 MiB/s [2024-11-21T03:23:44.236Z] 505.20 IOPS, 1.97 MiB/s 00:14:56.671 Latency(us) 00:14:56.671 [2024-11-21T03:23:44.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.671 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:56.671 xnvme_bdev : 5.44 475.68 1.86 0.00 0.00 129042.01 114.22 674315.03 00:14:56.671 [2024-11-21T03:23:44.236Z] =================================================================================================================== 00:14:56.671 [2024-11-21T03:23:44.236Z] Total : 475.68 1.86 0.00 0.00 129042.01 114.22 674315.03 00:14:56.931 00:14:56.931 real 0m22.458s 00:14:56.931 user 0m11.738s 00:14:56.931 sys 0m10.277s 00:14:56.931 03:23:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.931 03:23:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:56.931 ************************************ 00:14:56.931 END TEST xnvme_bdevperf 00:14:56.931 ************************************ 00:14:56.931 03:23:44 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:56.931 03:23:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:56.931 03:23:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.931 03:23:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:56.931 ************************************ 00:14:56.931 START TEST xnvme_fio_plugin 00:14:56.931 ************************************ 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 
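
The autotest_common.sh lines being traced here solve one problem: when SPDK is built with AddressSanitizer, the ASan runtime has to be loaded before the fio ioengine plugin, so the wrapper locates it with ldd and prepends it to LD_PRELOAD. Reduced to a sketch (plugin and fio paths as resolved in the trace; /tmp/xnvme.json stands for the same JSON config used for bdevperf above):

asan_lib=$(ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev" \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
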
00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:56.931 03:23:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:56.931 { 00:14:56.931 "subsystems": [ 00:14:56.931 { 00:14:56.931 "subsystem": "bdev", 00:14:56.931 "config": [ 00:14:56.931 { 00:14:56.931 "params": { 00:14:56.931 "io_mechanism": "io_uring_cmd", 00:14:56.931 "conserve_cpu": false, 00:14:56.931 "filename": "/dev/ng0n1", 00:14:56.931 "name": "xnvme_bdev" 00:14:56.931 }, 00:14:56.931 "method": "bdev_xnvme_create" 00:14:56.931 }, 00:14:56.931 { 00:14:56.931 "method": "bdev_wait_for_examine" 00:14:56.931 } 00:14:56.931 ] 00:14:56.931 } 00:14:56.931 ] 00:14:56.931 } 00:14:57.191 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:57.191 fio-3.35 00:14:57.191 Starting 1 thread 00:15:02.504 00:15:02.504 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84212: Thu Nov 21 03:23:49 2024 00:15:02.504 read: IOPS=42.5k, BW=166MiB/s (174MB/s)(830MiB/5002msec) 00:15:02.504 slat (nsec): min=2741, max=66063, avg=3529.86, stdev=1861.27 00:15:02.504 clat (usec): min=390, max=5207, avg=1364.79, stdev=388.41 00:15:02.504 lat (usec): min=400, max=5210, avg=1368.32, stdev=388.70 00:15:02.504 clat percentiles (usec): 00:15:02.504 | 1.00th=[ 676], 5.00th=[ 734], 10.00th=[ 799], 20.00th=[ 955], 00:15:02.504 | 30.00th=[ 1156], 40.00th=[ 1287], 50.00th=[ 1401], 60.00th=[ 1500], 00:15:02.504 | 70.00th=[ 1598], 80.00th=[ 1696], 90.00th=[ 1844], 95.00th=[ 1958], 00:15:02.504 | 99.00th=[ 2212], 99.50th=[ 2376], 99.90th=[ 2769], 99.95th=[ 2933], 00:15:02.504 | 99.99th=[ 3785] 00:15:02.504 bw ( KiB/s): min=143872, max=249344, per=100.00%, avg=171659.56, stdev=37592.91, samples=9 00:15:02.504 iops : min=35968, max=62336, avg=42914.89, stdev=9398.23, samples=9 00:15:02.504 lat (usec) : 500=0.01%, 750=6.14%, 1000=15.49% 00:15:02.504 lat (msec) : 2=74.34%, 4=4.03%, 10=0.01% 00:15:02.504 cpu : usr=38.37%, sys=60.53%, ctx=7, majf=0, minf=771 00:15:02.504 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:02.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.504 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=1.5%, >=64=0.0% 00:15:02.504 issued rwts: total=212588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.504 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:02.504 00:15:02.504 Run status group 0 (all jobs): 00:15:02.505 READ: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=830MiB (871MB), run=5002-5002msec 00:15:02.766 ----------------------------------------------------- 00:15:02.766 Suppressions used: 00:15:02.766 count bytes template 00:15:02.766 1 11 /usr/src/fio/parse.c 00:15:02.766 1 8 libtcmalloc_minimal.so 00:15:02.766 1 904 libcrypto.so 00:15:02.766 ----------------------------------------------------- 00:15:02.766 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:02.766 03:23:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:02.766 { 00:15:02.766 "subsystems": [ 00:15:02.766 { 00:15:02.766 "subsystem": "bdev", 00:15:02.766 "config": [ 00:15:02.766 { 00:15:02.766 "params": { 00:15:02.766 "io_mechanism": "io_uring_cmd", 00:15:02.766 "conserve_cpu": false, 00:15:02.766 "filename": "/dev/ng0n1", 00:15:02.766 "name": "xnvme_bdev" 00:15:02.766 }, 00:15:02.766 "method": "bdev_xnvme_create" 00:15:02.766 }, 00:15:02.766 { 00:15:02.766 "method": "bdev_wait_for_examine" 00:15:02.766 } 00:15:02.766 ] 00:15:02.766 } 00:15:02.766 ] 00:15:02.766 } 00:15:03.027 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:03.027 fio-3.35 00:15:03.027 Starting 1 thread 00:15:08.372 00:15:08.372 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84297: Thu Nov 21 03:23:55 2024 00:15:08.372 write: IOPS=33.7k, BW=132MiB/s (138MB/s)(659MiB/5002msec); 0 zone resets 00:15:08.372 slat (nsec): min=2795, max=85907, avg=3743.97, stdev=2263.04 00:15:08.372 clat (usec): min=52, max=14307, avg=1759.41, stdev=1055.10 00:15:08.372 lat (usec): min=55, max=14310, avg=1763.15, stdev=1055.21 00:15:08.372 clat percentiles (usec): 00:15:08.372 | 1.00th=[ 570], 5.00th=[ 930], 10.00th=[ 1139], 20.00th=[ 1336], 00:15:08.372 | 30.00th=[ 1450], 40.00th=[ 1532], 50.00th=[ 1598], 60.00th=[ 1680], 00:15:08.372 | 70.00th=[ 1762], 80.00th=[ 1876], 90.00th=[ 2089], 95.00th=[ 2507], 00:15:08.372 | 99.00th=[ 7570], 99.50th=[ 8717], 99.90th=[10421], 99.95th=[10814], 00:15:08.372 | 99.99th=[12256] 00:15:08.372 bw ( KiB/s): min=120856, max=138792, per=99.59%, avg=134253.33, stdev=5652.88, samples=9 00:15:08.372 iops : min=30214, max=34698, avg=33563.33, stdev=1413.22, samples=9 00:15:08.372 lat (usec) : 100=0.01%, 250=0.11%, 500=0.67%, 750=1.61%, 1000=3.95% 00:15:08.372 lat (msec) : 2=80.34%, 4=9.96%, 10=3.17%, 20=0.18% 00:15:08.372 cpu : usr=36.51%, sys=62.03%, ctx=15, majf=0, minf=771 00:15:08.372 IO depths : 1=1.2%, 2=2.4%, 4=4.9%, 8=10.3%, 16=22.7%, 32=56.1%, >=64=2.4% 00:15:08.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.372 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.5%, >=64=0.0% 00:15:08.372 issued rwts: total=0,168579,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.372 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:08.372 00:15:08.372 Run status group 0 (all jobs): 00:15:08.372 WRITE: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=659MiB (690MB), run=5002-5002msec 00:15:08.945 ----------------------------------------------------- 00:15:08.945 Suppressions used: 00:15:08.945 count bytes template 00:15:08.945 1 11 /usr/src/fio/parse.c 00:15:08.945 1 8 libtcmalloc_minimal.so 00:15:08.945 1 904 libcrypto.so 00:15:08.945 ----------------------------------------------------- 00:15:08.945 00:15:08.945 00:15:08.945 real 0m11.930s 00:15:08.945 user 0m4.866s 00:15:08.945 sys 0m6.623s 00:15:08.945 03:23:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:08.945 03:23:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:08.945 ************************************ 00:15:08.945 END TEST xnvme_fio_plugin 00:15:08.945 ************************************ 00:15:08.945 03:23:56 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:08.945 03:23:56 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:08.945 
03:23:56 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:08.945 03:23:56 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:08.945 03:23:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:08.945 03:23:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.945 03:23:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:08.945 ************************************ 00:15:08.945 START TEST xnvme_rpc 00:15:08.945 ************************************ 00:15:08.945 03:23:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:08.945 03:23:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:08.945 03:23:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:08.945 03:23:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:08.945 03:23:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:08.945 03:23:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=84371 00:15:08.945 03:23:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 84371 00:15:08.945 03:23:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 84371 ']' 00:15:08.945 03:23:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.945 03:23:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.945 03:23:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.945 03:23:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.945 03:23:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.945 03:23:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:08.945 [2024-11-21 03:23:56.467025] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:15:08.945 [2024-11-21 03:23:56.467181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84371 ] 00:15:09.206 [2024-11-21 03:23:56.605289] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
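
This second xnvme_rpc pass exercises the conserve_cpu path: the trace that follows creates the bdev with the -c flag, reads each parameter back through framework_get_config, and deletes the bdev again. Against a running spdk_tgt the same sequence looks like this (a sketch using the repo's rpc.py; the jq filter is the one from xnvme/common.sh):

./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # prints: true
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
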
00:15:09.206 [2024-11-21 03:23:56.640819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.206 [2024-11-21 03:23:56.670825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.777 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.777 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:09.777 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:09.777 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.777 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.777 xnvme_bdev 00:15:09.777 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.777 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:09.777 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:09.777 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:09.777 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.777 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 84371 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 84371 ']' 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 84371 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84371 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:10.039 killing process with pid 84371 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84371' 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 84371 00:15:10.039 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 84371 00:15:10.299 00:15:10.299 real 0m1.420s 00:15:10.299 user 0m1.495s 00:15:10.299 sys 0m0.419s 00:15:10.299 ************************************ 00:15:10.299 END TEST xnvme_rpc 00:15:10.299 ************************************ 00:15:10.299 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.300 03:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.300 03:23:57 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:10.300 03:23:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:10.300 03:23:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:10.300 03:23:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:10.560 ************************************ 00:15:10.560 START TEST xnvme_bdevperf 00:15:10.560 ************************************ 00:15:10.560 03:23:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:10.560 03:23:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:10.560 03:23:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:10.560 03:23:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:10.560 03:23:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:10.560 03:23:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:10.560 03:23:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:10.560 03:23:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:10.560 { 00:15:10.560 "subsystems": [ 00:15:10.560 { 00:15:10.560 "subsystem": "bdev", 00:15:10.560 "config": [ 
00:15:10.560 { 00:15:10.560 "params": { 00:15:10.560 "io_mechanism": "io_uring_cmd", 00:15:10.560 "conserve_cpu": true, 00:15:10.560 "filename": "/dev/ng0n1", 00:15:10.560 "name": "xnvme_bdev" 00:15:10.560 }, 00:15:10.560 "method": "bdev_xnvme_create" 00:15:10.560 }, 00:15:10.560 { 00:15:10.560 "method": "bdev_wait_for_examine" 00:15:10.560 } 00:15:10.560 ] 00:15:10.560 } 00:15:10.560 ] 00:15:10.560 } 00:15:10.561 [2024-11-21 03:23:57.934478] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:15:10.561 [2024-11-21 03:23:57.934609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84429 ] 00:15:10.561 [2024-11-21 03:23:58.071503] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:10.561 [2024-11-21 03:23:58.100711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.822 [2024-11-21 03:23:58.142408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.822 Running I/O for 5 seconds... 00:15:13.160 40000.00 IOPS, 156.25 MiB/s [2024-11-21T03:24:01.298Z] 38716.00 IOPS, 151.23 MiB/s [2024-11-21T03:24:02.679Z] 37804.00 IOPS, 147.67 MiB/s [2024-11-21T03:24:03.623Z] 37242.00 IOPS, 145.48 MiB/s 00:15:16.058 Latency(us) 00:15:16.058 [2024-11-21T03:24:03.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.058 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:16.058 xnvme_bdev : 5.00 36740.66 143.52 0.00 0.00 1737.89 831.80 10485.76 00:15:16.058 [2024-11-21T03:24:03.623Z] =================================================================================================================== 00:15:16.058 [2024-11-21T03:24:03.623Z] Total : 36740.66 143.52 0.00 0.00 1737.89 831.80 10485.76 00:15:16.058 03:24:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:16.058 03:24:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:16.058 03:24:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:16.058 03:24:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:16.058 03:24:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:16.058 { 00:15:16.058 "subsystems": [ 00:15:16.058 { 00:15:16.058 "subsystem": "bdev", 00:15:16.058 "config": [ 00:15:16.058 { 00:15:16.058 "params": { 00:15:16.058 "io_mechanism": "io_uring_cmd", 00:15:16.058 "conserve_cpu": true, 00:15:16.058 "filename": "/dev/ng0n1", 00:15:16.058 "name": "xnvme_bdev" 00:15:16.058 }, 00:15:16.058 "method": "bdev_xnvme_create" 00:15:16.058 }, 00:15:16.058 { 00:15:16.058 "method": "bdev_wait_for_examine" 00:15:16.058 } 00:15:16.058 ] 00:15:16.058 } 00:15:16.058 ] 00:15:16.058 } 00:15:16.058 [2024-11-21 03:24:03.553391] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:15:16.058 [2024-11-21 03:24:03.553530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84492 ] 00:15:16.320 [2024-11-21 03:24:03.691008] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:16.320 [2024-11-21 03:24:03.720798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.320 [2024-11-21 03:24:03.750541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.320 Running I/O for 5 seconds... 00:15:18.649 31849.00 IOPS, 124.41 MiB/s [2024-11-21T03:24:07.156Z] 32945.00 IOPS, 128.69 MiB/s [2024-11-21T03:24:08.094Z] 32717.33 IOPS, 127.80 MiB/s [2024-11-21T03:24:09.037Z] 32697.25 IOPS, 127.72 MiB/s 00:15:21.472 Latency(us) 00:15:21.472 [2024-11-21T03:24:09.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.472 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:21.472 xnvme_bdev : 5.00 32841.81 128.29 0.00 0.00 1944.61 64.98 19055.85 00:15:21.472 [2024-11-21T03:24:09.037Z] =================================================================================================================== 00:15:21.472 [2024-11-21T03:24:09.037Z] Total : 32841.81 128.29 0.00 0.00 1944.61 64.98 19055.85 00:15:21.740 03:24:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:21.740 03:24:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:21.740 03:24:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:21.740 03:24:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:21.740 03:24:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:21.740 { 00:15:21.740 "subsystems": [ 00:15:21.740 { 00:15:21.740 "subsystem": "bdev", 00:15:21.740 "config": [ 00:15:21.740 { 00:15:21.740 "params": { 00:15:21.740 "io_mechanism": "io_uring_cmd", 00:15:21.740 "conserve_cpu": true, 00:15:21.740 "filename": "/dev/ng0n1", 00:15:21.740 "name": "xnvme_bdev" 00:15:21.740 }, 00:15:21.740 "method": "bdev_xnvme_create" 00:15:21.740 }, 00:15:21.740 { 00:15:21.740 "method": "bdev_wait_for_examine" 00:15:21.740 } 00:15:21.740 ] 00:15:21.740 } 00:15:21.740 ] 00:15:21.740 } 00:15:21.740 [2024-11-21 03:24:09.116001] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:15:21.740 [2024-11-21 03:24:09.116129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84568 ] 00:15:21.740 [2024-11-21 03:24:09.251545] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:21.740 [2024-11-21 03:24:09.283288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.004 [2024-11-21 03:24:09.313065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.004 Running I/O for 5 seconds... 
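
This battery repeats the same four workloads with conserve_cpu flipped on; the generated config is identical to the earlier one except for that single boolean. Expressed as a one-line edit of the scratch config from the first sketch (assuming jq is available):

jq '.subsystems[0].config[0].params.conserve_cpu = true' /tmp/xnvme.json
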
00:15:23.893 80128.00 IOPS, 313.00 MiB/s [2024-11-21T03:24:12.845Z] 79968.00 IOPS, 312.38 MiB/s [2024-11-21T03:24:13.789Z] 80170.67 IOPS, 313.17 MiB/s [2024-11-21T03:24:14.732Z] 80240.00 IOPS, 313.44 MiB/s 00:15:27.167 Latency(us) 00:15:27.167 [2024-11-21T03:24:14.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.167 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:27.167 xnvme_bdev : 5.00 80159.38 313.12 0.00 0.00 795.00 392.27 2709.66 00:15:27.167 [2024-11-21T03:24:14.732Z] =================================================================================================================== 00:15:27.167 [2024-11-21T03:24:14.732Z] Total : 80159.38 313.12 0.00 0.00 795.00 392.27 2709.66 00:15:27.167 03:24:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:27.167 03:24:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:27.167 03:24:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:27.167 03:24:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:27.167 03:24:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:27.167 { 00:15:27.167 "subsystems": [ 00:15:27.167 { 00:15:27.167 "subsystem": "bdev", 00:15:27.167 "config": [ 00:15:27.167 { 00:15:27.167 "params": { 00:15:27.167 "io_mechanism": "io_uring_cmd", 00:15:27.167 "conserve_cpu": true, 00:15:27.167 "filename": "/dev/ng0n1", 00:15:27.167 "name": "xnvme_bdev" 00:15:27.167 }, 00:15:27.167 "method": "bdev_xnvme_create" 00:15:27.167 }, 00:15:27.167 { 00:15:27.167 "method": "bdev_wait_for_examine" 00:15:27.167 } 00:15:27.167 ] 00:15:27.167 } 00:15:27.167 ] 00:15:27.167 } 00:15:27.167 [2024-11-21 03:24:14.675699] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:15:27.167 [2024-11-21 03:24:14.675842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84633 ] 00:15:27.428 [2024-11-21 03:24:14.812026] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:27.428 [2024-11-21 03:24:14.843436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.428 [2024-11-21 03:24:14.872597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.428 Running I/O for 5 seconds... 
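
The unmap numbers above also line up with Little's law for a closed queue: sustained IOPS ≈ queue depth / mean latency, here 64 / 795.00 us ≈ 80,503 IOPS against the measured 80,159.38; the roughly 0.4% shortfall is submission and completion overhead spent outside the device.
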
00:15:29.855 51551.00 IOPS, 201.37 MiB/s [2024-11-21T03:24:18.018Z] 53106.50 IOPS, 207.45 MiB/s [2024-11-21T03:24:19.411Z] 50015.00 IOPS, 195.37 MiB/s [2024-11-21T03:24:19.986Z] 47203.00 IOPS, 184.39 MiB/s [2024-11-21T03:24:19.986Z] 45610.80 IOPS, 178.17 MiB/s 00:15:32.421 Latency(us) 00:15:32.421 [2024-11-21T03:24:19.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.421 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:32.421 xnvme_bdev : 5.01 45550.51 177.93 0.00 0.00 1398.47 225.28 18450.90 00:15:32.421 [2024-11-21T03:24:19.986Z] =================================================================================================================== 00:15:32.421 [2024-11-21T03:24:19.986Z] Total : 45550.51 177.93 0.00 0.00 1398.47 225.28 18450.90 00:15:32.683 ************************************ 00:15:32.683 END TEST xnvme_bdevperf 00:15:32.683 ************************************ 00:15:32.683 00:15:32.683 real 0m22.292s 00:15:32.683 user 0m13.498s 00:15:32.683 sys 0m6.318s 00:15:32.683 03:24:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.683 03:24:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:32.683 03:24:20 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:32.683 03:24:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:32.683 03:24:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.683 03:24:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:32.683 ************************************ 00:15:32.683 START TEST xnvme_fio_plugin 00:15:32.683 ************************************ 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:32.683 03:24:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:32.945 03:24:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:32.946 03:24:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:32.946 03:24:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:32.946 03:24:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:32.946 03:24:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:32.946 { 00:15:32.946 "subsystems": [ 00:15:32.946 { 00:15:32.946 "subsystem": "bdev", 00:15:32.946 "config": [ 00:15:32.946 { 00:15:32.946 "params": { 00:15:32.946 "io_mechanism": "io_uring_cmd", 00:15:32.946 "conserve_cpu": true, 00:15:32.946 "filename": "/dev/ng0n1", 00:15:32.946 "name": "xnvme_bdev" 00:15:32.946 }, 00:15:32.946 "method": "bdev_xnvme_create" 00:15:32.946 }, 00:15:32.946 { 00:15:32.946 "method": "bdev_wait_for_examine" 00:15:32.946 } 00:15:32.946 ] 00:15:32.946 } 00:15:32.946 ] 00:15:32.946 } 00:15:32.946 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:32.946 fio-3.35 00:15:32.946 Starting 1 thread 00:15:39.536 00:15:39.536 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84735: Thu Nov 21 03:24:25 2024 00:15:39.536 read: IOPS=38.9k, BW=152MiB/s (159MB/s)(760MiB/5001msec) 00:15:39.536 slat (usec): min=2, max=185, avg= 3.48, stdev= 1.73 00:15:39.536 clat (usec): min=427, max=3420, avg=1503.94, stdev=281.91 00:15:39.536 lat (usec): min=429, max=3451, avg=1507.42, stdev=282.26 00:15:39.536 clat percentiles (usec): 00:15:39.536 | 1.00th=[ 1045], 5.00th=[ 1123], 10.00th=[ 1172], 20.00th=[ 1254], 00:15:39.536 | 30.00th=[ 1319], 40.00th=[ 1401], 50.00th=[ 1467], 60.00th=[ 1549], 00:15:39.536 | 70.00th=[ 1631], 80.00th=[ 1729], 90.00th=[ 1893], 95.00th=[ 2024], 00:15:39.536 | 99.00th=[ 2278], 99.50th=[ 2409], 99.90th=[ 2638], 99.95th=[ 2737], 00:15:39.536 | 99.99th=[ 3228] 00:15:39.536 bw ( KiB/s): min=140288, max=178688, per=100.00%, avg=155761.78, stdev=14273.64, samples=9 00:15:39.536 iops : min=35072, max=44672, avg=38940.44, stdev=3568.41, samples=9 00:15:39.536 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.34% 00:15:39.536 lat (msec) : 2=94.03%, 4=5.61% 00:15:39.536 cpu : usr=60.32%, sys=36.64%, ctx=9, majf=0, minf=771 00:15:39.536 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:39.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.536 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 
32=0.1%, 64=1.5%, >=64=0.0% 00:15:39.536 issued rwts: total=194588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:39.536 00:15:39.536 Run status group 0 (all jobs): 00:15:39.536 READ: bw=152MiB/s (159MB/s), 152MiB/s-152MiB/s (159MB/s-159MB/s), io=760MiB (797MB), run=5001-5001msec 00:15:39.536 ----------------------------------------------------- 00:15:39.536 Suppressions used: 00:15:39.536 count bytes template 00:15:39.536 1 11 /usr/src/fio/parse.c 00:15:39.536 1 8 libtcmalloc_minimal.so 00:15:39.536 1 904 libcrypto.so 00:15:39.536 ----------------------------------------------------- 00:15:39.536 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:39.536 03:24:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:39.536 { 00:15:39.536 "subsystems": [ 00:15:39.536 { 00:15:39.536 "subsystem": "bdev", 00:15:39.536 "config": [ 00:15:39.536 { 00:15:39.536 "params": { 00:15:39.536 "io_mechanism": "io_uring_cmd", 00:15:39.536 "conserve_cpu": true, 00:15:39.536 "filename": "/dev/ng0n1", 00:15:39.536 "name": "xnvme_bdev" 00:15:39.536 }, 00:15:39.536 "method": "bdev_xnvme_create" 00:15:39.536 }, 00:15:39.536 { 00:15:39.536 "method": "bdev_wait_for_examine" 00:15:39.536 } 00:15:39.536 ] 00:15:39.536 } 00:15:39.536 ] 00:15:39.536 } 00:15:39.536 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:39.536 fio-3.35 00:15:39.536 Starting 1 thread 00:15:44.821 00:15:44.821 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84820: Thu Nov 21 03:24:31 2024 00:15:44.821 write: IOPS=40.1k, BW=157MiB/s (164MB/s)(784MiB/5007msec); 0 zone resets 00:15:44.821 slat (usec): min=2, max=303, avg= 3.95, stdev= 2.34 00:15:44.821 clat (usec): min=636, max=7539, avg=1440.74, stdev=281.33 00:15:44.821 lat (usec): min=639, max=7543, avg=1444.69, stdev=281.87 00:15:44.821 clat percentiles (usec): 00:15:44.821 | 1.00th=[ 1012], 5.00th=[ 1090], 10.00th=[ 1139], 20.00th=[ 1221], 00:15:44.821 | 30.00th=[ 1287], 40.00th=[ 1352], 50.00th=[ 1418], 60.00th=[ 1483], 00:15:44.821 | 70.00th=[ 1549], 80.00th=[ 1631], 90.00th=[ 1778], 95.00th=[ 1893], 00:15:44.821 | 99.00th=[ 2180], 99.50th=[ 2409], 99.90th=[ 3261], 99.95th=[ 3556], 00:15:44.821 | 99.99th=[ 7439] 00:15:44.821 bw ( KiB/s): min=146944, max=184400, per=100.00%, avg=160471.20, stdev=13698.38, samples=10 00:15:44.821 iops : min=36736, max=46100, avg=40117.80, stdev=3424.59, samples=10 00:15:44.821 lat (usec) : 750=0.02%, 1000=0.80% 00:15:44.822 lat (msec) : 2=96.56%, 4=2.58%, 10=0.04% 00:15:44.822 cpu : usr=51.58%, sys=42.29%, ctx=14, majf=0, minf=771 00:15:44.822 IO depths : 1=1.4%, 2=2.9%, 4=6.0%, 8=12.4%, 16=25.0%, 32=50.5%, >=64=1.7% 00:15:44.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.822 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:44.822 issued rwts: total=0,200644,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:44.822 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:44.822 00:15:44.822 Run status group 0 (all jobs): 00:15:44.822 WRITE: bw=157MiB/s (164MB/s), 157MiB/s-157MiB/s (164MB/s-164MB/s), io=784MiB (822MB), run=5007-5007msec 00:15:44.822 ----------------------------------------------------- 00:15:44.822 Suppressions used: 00:15:44.822 count bytes template 00:15:44.822 1 11 /usr/src/fio/parse.c 00:15:44.822 1 8 libtcmalloc_minimal.so 00:15:44.822 1 904 libcrypto.so 00:15:44.822 ----------------------------------------------------- 00:15:44.822 00:15:44.822 00:15:44.822 real 0m12.069s 00:15:44.822 user 0m6.768s 00:15:44.822 sys 0m4.524s 00:15:44.822 03:24:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.822 ************************************ 00:15:44.822 END TEST xnvme_fio_plugin 00:15:44.822 ************************************ 00:15:44.822 03:24:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:44.822 03:24:32 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 84371 00:15:44.822 03:24:32 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 84371 ']' 00:15:44.822 03:24:32 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 84371 00:15:44.822 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84371) - No such process 00:15:44.822 Process with pid 84371 is not found 00:15:44.822 03:24:32 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 84371 is not found' 00:15:44.822 00:15:44.822 real 2m58.135s 00:15:44.822 user 1m28.967s 00:15:44.822 sys 1m14.953s 00:15:44.822 03:24:32 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.822 ************************************ 00:15:44.822 END TEST nvme_xnvme 00:15:44.822 ************************************ 00:15:44.822 03:24:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:45.083 03:24:32 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:45.083 03:24:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:45.083 03:24:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.083 03:24:32 -- common/autotest_common.sh@10 -- # set +x 00:15:45.083 ************************************ 00:15:45.083 START TEST blockdev_xnvme 00:15:45.083 ************************************ 00:15:45.083 03:24:32 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:45.083 * Looking for test storage... 00:15:45.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:45.083 03:24:32 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:45.083 03:24:32 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:45.083 03:24:32 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:45.083 03:24:32 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:45.083 03:24:32 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:45.083 03:24:32 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:45.083 03:24:32 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:45.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.083 --rc genhtml_branch_coverage=1 00:15:45.083 --rc genhtml_function_coverage=1 00:15:45.083 --rc genhtml_legend=1 00:15:45.083 --rc geninfo_all_blocks=1 00:15:45.083 --rc geninfo_unexecuted_blocks=1 00:15:45.083 00:15:45.083 ' 00:15:45.083 03:24:32 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:45.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.083 --rc genhtml_branch_coverage=1 00:15:45.083 --rc genhtml_function_coverage=1 00:15:45.083 --rc genhtml_legend=1 00:15:45.083 --rc geninfo_all_blocks=1 00:15:45.083 --rc geninfo_unexecuted_blocks=1 00:15:45.083 00:15:45.083 ' 00:15:45.083 03:24:32 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:45.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.083 --rc genhtml_branch_coverage=1 00:15:45.083 --rc genhtml_function_coverage=1 00:15:45.083 --rc genhtml_legend=1 00:15:45.083 --rc geninfo_all_blocks=1 00:15:45.083 --rc geninfo_unexecuted_blocks=1 00:15:45.083 00:15:45.083 ' 00:15:45.083 03:24:32 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:45.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.083 --rc genhtml_branch_coverage=1 00:15:45.084 --rc genhtml_function_coverage=1 00:15:45.084 --rc genhtml_legend=1 00:15:45.084 --rc geninfo_all_blocks=1 00:15:45.084 --rc geninfo_unexecuted_blocks=1 00:15:45.084 00:15:45.084 ' 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=84949 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 84949 00:15:45.084 03:24:32 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 84949 ']' 00:15:45.084 03:24:32 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.084 03:24:32 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.084 03:24:32 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.084 03:24:32 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.084 03:24:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:45.084 03:24:32 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:45.345 [2024-11-21 03:24:32.646602] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:15:45.345 [2024-11-21 03:24:32.646756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84949 ] 00:15:45.345 [2024-11-21 03:24:32.783368] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
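The lcov gate traced just above (scripts/common.sh lt -> cmp_versions) splits dotted versions on the characters `.-:` and compares them field by field, treating a missing field as 0. A minimal standalone sketch of that comparison, using a hypothetical helper name ver_lt rather than the harness's exact functions, and assuming purely numeric fields:

# ver_lt A B: succeed iff dotted version A sorts strictly before B.
# Sketch of the cmp_versions idea traced above; numeric fields assumed.
ver_lt() {
    local -a a b
    local n v x y
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( v = 0; v < n; v++ )); do
        x=${a[v]:-0} y=${b[v]:-0}              # pad the shorter version with 0s
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                                   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "older"                  # succeeds, matching the lt 1.15 2 trace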
00:15:45.346 [2024-11-21 03:24:32.812994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.346 [2024-11-21 03:24:32.841774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.289 03:24:33 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.289 03:24:33 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:46.289 03:24:33 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:46.289 03:24:33 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:15:46.289 03:24:33 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:46.289 03:24:33 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:46.289 03:24:33 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:46.551 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:47.124 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:47.124 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:47.124 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:47.124 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:47.124 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:47.124 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:47.124 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:47.124 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:47.124 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:47.124 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:47.124 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:47.124 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:47.124 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:47.124 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 
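waitforlisten above only returns once the freshly started spdk_tgt (pid 84949) is alive and its UNIX-domain RPC socket answers. A rough sketch of that polling pattern; the retry count, sleep interval, and the rpc_get_methods probe are assumptions here, not the harness's exact logic:

# wait_for_rpc_sock PID [SOCK]: poll until PID is alive and SOCK accepts RPCs.
wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1              # target died early
        [[ -S $sock ]] && "$rpc_py" -s "$sock" rpc_get_methods \
            >/dev/null 2>&1 && return 0                      # socket is answering
        sleep 0.1
    done
    return 1                                                 # gave up after 100 tries
}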
00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 
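The get_zoned_devs pass traced above walks /sys/block/nvme* and records any namespace whose queue/zoned attribute reads something other than none; every device in this run reports none, so nothing is excluded. The same scan as a standalone snippet (the zoned-device bookkeeping is simplified to an associative array):

# Collect zoned NVMe namespaces, mirroring the sysfs checks traced above.
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    zattr=/sys/block/$dev/queue/zoned
    # conventional (non-zoned) block devices report "none" here
    if [[ -e $zattr && $(<"$zattr") != none ]]; then
        zoned_devs[$dev]=1
    fi
done
echo "zoned namespaces: ${!zoned_devs[*]}"     # empty on this test bed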
00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:47.125 nvme0n1 00:15:47.125 nvme0n2 00:15:47.125 nvme0n3 00:15:47.125 nvme1n1 00:15:47.125 nvme2n1 00:15:47.125 nvme3n1 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:15:47.125 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.125 03:24:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.388 03:24:34 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.388 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:47.388 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:47.388 
03:24:34 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "259d7aac-dc51-4485-b3a1-b9d822704ed9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "259d7aac-dc51-4485-b3a1-b9d822704ed9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "c5b93bdf-87e6-430c-9fb1-8881ca16986d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c5b93bdf-87e6-430c-9fb1-8881ca16986d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "a5182356-ee53-4bc1-b1d1-1e1f7b2566cc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a5182356-ee53-4bc1-b1d1-1e1f7b2566cc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "727e3318-d855-4893-8849-519b60dfcfb2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "727e3318-d855-4893-8849-519b60dfcfb2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' 
"abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "84d7e84a-d408-4118-aab1-37fdc5816cab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "84d7e84a-d408-4118-aab1-37fdc5816cab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a94f48b4-329c-4538-a4f7-4a6907e0b325"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a94f48b4-329c-4538-a4f7-4a6907e0b325",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:47.388 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:47.388 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:15:47.388 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:47.388 03:24:34 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 84949 00:15:47.388 03:24:34 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 84949 ']' 00:15:47.388 03:24:34 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 84949 00:15:47.388 03:24:34 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:47.388 03:24:34 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.388 03:24:34 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84949 00:15:47.388 03:24:34 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:47.388 03:24:34 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:47.388 killing process with pid 84949 00:15:47.388 03:24:34 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84949' 00:15:47.388 03:24:34 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 84949 00:15:47.388 03:24:34 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 84949 00:15:47.650 03:24:35 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:47.650 03:24:35 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world 
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:47.650 03:24:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:47.650 03:24:35 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.650 03:24:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.650 ************************************ 00:15:47.650 START TEST bdev_hello_world 00:15:47.650 ************************************ 00:15:47.650 03:24:35 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:47.650 [2024-11-21 03:24:35.144457] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:15:47.650 [2024-11-21 03:24:35.144593] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85211 ] 00:15:47.910 [2024-11-21 03:24:35.280518] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:47.910 [2024-11-21 03:24:35.311344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.910 [2024-11-21 03:24:35.339824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.171 [2024-11-21 03:24:35.562233] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:48.171 [2024-11-21 03:24:35.562287] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:48.171 [2024-11-21 03:24:35.562307] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:48.171 [2024-11-21 03:24:35.564545] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:48.171 [2024-11-21 03:24:35.565577] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:48.171 [2024-11-21 03:24:35.565771] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:48.171 [2024-11-21 03:24:35.566192] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
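The six xnvme bdevs that hello_bdev just exercised were assembled earlier by setup_xnvme_conf: one bdev_xnvme_create line per non-zoned /dev/nvme*n* node, replayed over the RPC socket in a single batch. Reduced to its essentials (io_uring and the -c conserve-cpu flag as in the trace; the zoned-device skip is omitted, and feeding rpc.py one request per stdin line matches how rpc_cmd drives it):

# Build one bdev_xnvme_create request per namespace and replay them over RPC.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
io_mechanism=io_uring
nvmes=()
for nvme in /dev/nvme*n*; do
    [[ -b $nvme ]] || continue                 # only real block nodes
    nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
done
# rpc.py executes one request per stdin line, as rpc_cmd does above
(( ${#nvmes[@]} > 0 )) && printf '%s\n' "${nvmes[@]}" | "$rpc_py"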
00:15:48.171 00:15:48.171 [2024-11-21 03:24:35.566224] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:48.434 00:15:48.434 real 0m0.675s 00:15:48.434 user 0m0.339s 00:15:48.434 sys 0m0.192s 00:15:48.434 03:24:35 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.434 ************************************ 00:15:48.434 END TEST bdev_hello_world 00:15:48.434 ************************************ 00:15:48.434 03:24:35 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:48.434 03:24:35 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:15:48.434 03:24:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:48.434 03:24:35 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.434 03:24:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:48.434 ************************************ 00:15:48.434 START TEST bdev_bounds 00:15:48.434 ************************************ 00:15:48.434 Process bdevio pid: 85242 00:15:48.434 03:24:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:48.434 03:24:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=85242 00:15:48.434 03:24:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:48.434 03:24:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 85242' 00:15:48.434 03:24:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 85242 00:15:48.434 03:24:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 85242 ']' 00:15:48.434 03:24:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.434 03:24:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.434 03:24:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:48.434 03:24:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.434 03:24:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.434 03:24:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:48.434 [2024-11-21 03:24:35.888267] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:15:48.434 [2024-11-21 03:24:35.888645] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85242 ] 00:15:48.696 [2024-11-21 03:24:36.025740] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
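The big bdev JSON dumped before the hello_world test came from bdev_get_bdevs; the harness reduced it with jq in two mapfile steps (unclaimed bdevs, then names). Condensed into one pipeline (rpc.py path as used throughout this run):

# List the names of all unclaimed bdevs, as blockdev.sh's mapfile steps do.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
mapfile -t bdevs_name < <(
    "$rpc_py" bdev_get_bdevs |
        jq -r '.[] | select(.claimed == false) | .name'
)
printf 'bdev: %s\n' "${bdevs_name[@]}"         # nvme0n1 .. nvme3n1 in this run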
00:15:48.696 [2024-11-21 03:24:36.053671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:48.696 [2024-11-21 03:24:36.085372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.696 [2024-11-21 03:24:36.085671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.696 [2024-11-21 03:24:36.085699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.268 03:24:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.268 03:24:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:49.268 03:24:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:49.530 I/O targets: 00:15:49.530 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:49.530 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:49.530 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:49.530 nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:49.530 nvme2n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:49.530 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:49.530 00:15:49.530 00:15:49.530 CUnit - A unit testing framework for C - Version 2.1-3 00:15:49.530 http://cunit.sourceforge.net/ 00:15:49.530 00:15:49.530 00:15:49.530 Suite: bdevio tests on: nvme3n1 00:15:49.530 Test: blockdev write read block ...passed 00:15:49.530 Test: blockdev write zeroes read block ...passed 00:15:49.530 Test: blockdev write zeroes read no split ...passed 00:15:49.530 Test: blockdev write zeroes read split ...passed 00:15:49.530 Test: blockdev write zeroes read split partial ...passed 00:15:49.530 Test: blockdev reset ...passed 00:15:49.530 Test: blockdev write read 8 blocks ...passed 00:15:49.530 Test: blockdev write read size > 128k ...passed 00:15:49.530 Test: blockdev write read invalid size ...passed 00:15:49.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:49.530 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:49.530 Test: blockdev write read max offset ...passed 00:15:49.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:49.530 Test: blockdev writev readv 8 blocks ...passed 00:15:49.530 Test: blockdev writev readv 30 x 1block ...passed 00:15:49.530 Test: blockdev writev readv block ...passed 00:15:49.530 Test: blockdev writev readv size > 128k ...passed 00:15:49.530 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:49.530 Test: blockdev comparev and writev ...passed 00:15:49.530 Test: blockdev nvme passthru rw ...passed 00:15:49.530 Test: blockdev nvme passthru vendor specific ...passed 00:15:49.530 Test: blockdev nvme admin passthru ...passed 00:15:49.530 Test: blockdev copy ...passed 00:15:49.530 Suite: bdevio tests on: nvme2n1 00:15:49.530 Test: blockdev write read block ...passed 00:15:49.530 Test: blockdev write zeroes read block ...passed 00:15:49.530 Test: blockdev write zeroes read no split ...passed 00:15:49.530 Test: blockdev write zeroes read split ...passed 00:15:49.530 Test: blockdev write zeroes read split partial ...passed 00:15:49.530 Test: blockdev reset ...passed 00:15:49.530 Test: blockdev write read 8 blocks ...passed 00:15:49.530 Test: blockdev write read size > 128k ...passed 00:15:49.530 Test: blockdev write read invalid size ...passed 00:15:49.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:49.530 Test: blockdev write read offset + nbytes > 
size of blockdev ...passed 00:15:49.530 Test: blockdev write read max offset ...passed 00:15:49.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:49.530 Test: blockdev writev readv 8 blocks ...passed 00:15:49.530 Test: blockdev writev readv 30 x 1block ...passed 00:15:49.530 Test: blockdev writev readv block ...passed 00:15:49.530 Test: blockdev writev readv size > 128k ...passed 00:15:49.530 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:49.530 Test: blockdev comparev and writev ...passed 00:15:49.530 Test: blockdev nvme passthru rw ...passed 00:15:49.530 Test: blockdev nvme passthru vendor specific ...passed 00:15:49.530 Test: blockdev nvme admin passthru ...passed 00:15:49.530 Test: blockdev copy ...passed 00:15:49.530 Suite: bdevio tests on: nvme1n1 00:15:49.530 Test: blockdev write read block ...passed 00:15:49.530 Test: blockdev write zeroes read block ...passed 00:15:49.530 Test: blockdev write zeroes read no split ...passed 00:15:49.530 Test: blockdev write zeroes read split ...passed 00:15:49.530 Test: blockdev write zeroes read split partial ...passed 00:15:49.530 Test: blockdev reset ...passed 00:15:49.530 Test: blockdev write read 8 blocks ...passed 00:15:49.530 Test: blockdev write read size > 128k ...passed 00:15:49.530 Test: blockdev write read invalid size ...passed 00:15:49.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:49.530 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:49.530 Test: blockdev write read max offset ...passed 00:15:49.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:49.530 Test: blockdev writev readv 8 blocks ...passed 00:15:49.530 Test: blockdev writev readv 30 x 1block ...passed 00:15:49.530 Test: blockdev writev readv block ...passed 00:15:49.530 Test: blockdev writev readv size > 128k ...passed 00:15:49.530 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:49.530 Test: blockdev comparev and writev ...passed 00:15:49.530 Test: blockdev nvme passthru rw ...passed 00:15:49.530 Test: blockdev nvme passthru vendor specific ...passed 00:15:49.530 Test: blockdev nvme admin passthru ...passed 00:15:49.530 Test: blockdev copy ...passed 00:15:49.530 Suite: bdevio tests on: nvme0n3 00:15:49.530 Test: blockdev write read block ...passed 00:15:49.530 Test: blockdev write zeroes read block ...passed 00:15:49.530 Test: blockdev write zeroes read no split ...passed 00:15:49.530 Test: blockdev write zeroes read split ...passed 00:15:49.530 Test: blockdev write zeroes read split partial ...passed 00:15:49.530 Test: blockdev reset ...passed 00:15:49.530 Test: blockdev write read 8 blocks ...passed 00:15:49.530 Test: blockdev write read size > 128k ...passed 00:15:49.530 Test: blockdev write read invalid size ...passed 00:15:49.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:49.530 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:49.530 Test: blockdev write read max offset ...passed 00:15:49.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:49.530 Test: blockdev writev readv 8 blocks ...passed 00:15:49.530 Test: blockdev writev readv 30 x 1block ...passed 00:15:49.531 Test: blockdev writev readv block ...passed 00:15:49.531 Test: blockdev writev readv size > 128k ...passed 00:15:49.531 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:49.531 Test: blockdev comparev and writev ...passed 
00:15:49.531 Test: blockdev nvme passthru rw ...passed 00:15:49.531 Test: blockdev nvme passthru vendor specific ...passed 00:15:49.531 Test: blockdev nvme admin passthru ...passed 00:15:49.531 Test: blockdev copy ...passed 00:15:49.531 Suite: bdevio tests on: nvme0n2 00:15:49.531 Test: blockdev write read block ...passed 00:15:49.531 Test: blockdev write zeroes read block ...passed 00:15:49.531 Test: blockdev write zeroes read no split ...passed 00:15:49.531 Test: blockdev write zeroes read split ...passed 00:15:49.531 Test: blockdev write zeroes read split partial ...passed 00:15:49.531 Test: blockdev reset ...passed 00:15:49.531 Test: blockdev write read 8 blocks ...passed 00:15:49.531 Test: blockdev write read size > 128k ...passed 00:15:49.531 Test: blockdev write read invalid size ...passed 00:15:49.531 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:49.531 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:49.531 Test: blockdev write read max offset ...passed 00:15:49.531 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:49.531 Test: blockdev writev readv 8 blocks ...passed 00:15:49.531 Test: blockdev writev readv 30 x 1block ...passed 00:15:49.531 Test: blockdev writev readv block ...passed 00:15:49.531 Test: blockdev writev readv size > 128k ...passed 00:15:49.531 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:49.531 Test: blockdev comparev and writev ...passed 00:15:49.531 Test: blockdev nvme passthru rw ...passed 00:15:49.531 Test: blockdev nvme passthru vendor specific ...passed 00:15:49.531 Test: blockdev nvme admin passthru ...passed 00:15:49.531 Test: blockdev copy ...passed 00:15:49.531 Suite: bdevio tests on: nvme0n1 00:15:49.531 Test: blockdev write read block ...passed 00:15:49.531 Test: blockdev write zeroes read block ...passed 00:15:49.531 Test: blockdev write zeroes read no split ...passed 00:15:49.531 Test: blockdev write zeroes read split ...passed 00:15:49.792 Test: blockdev write zeroes read split partial ...passed 00:15:49.792 Test: blockdev reset ...passed 00:15:49.792 Test: blockdev write read 8 blocks ...passed 00:15:49.792 Test: blockdev write read size > 128k ...passed 00:15:49.792 Test: blockdev write read invalid size ...passed 00:15:49.792 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:49.792 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:49.792 Test: blockdev write read max offset ...passed 00:15:49.792 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:49.792 Test: blockdev writev readv 8 blocks ...passed 00:15:49.792 Test: blockdev writev readv 30 x 1block ...passed 00:15:49.792 Test: blockdev writev readv block ...passed 00:15:49.792 Test: blockdev writev readv size > 128k ...passed 00:15:49.792 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:49.792 Test: blockdev comparev and writev ...passed 00:15:49.792 Test: blockdev nvme passthru rw ...passed 00:15:49.792 Test: blockdev nvme passthru vendor specific ...passed 00:15:49.792 Test: blockdev nvme admin passthru ...passed 00:15:49.792 Test: blockdev copy ...passed 00:15:49.792 00:15:49.792 Run Summary: Type Total Ran Passed Failed Inactive 00:15:49.792 suites 6 6 n/a 0 0 00:15:49.792 tests 138 138 138 0 0 00:15:49.792 asserts 780 780 780 0 n/a 00:15:49.792 00:15:49.792 Elapsed time = 0.611 seconds 00:15:49.792 0 00:15:49.792 03:24:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # 
killprocess 85242 00:15:49.792 03:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 85242 ']' 00:15:49.792 03:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 85242 00:15:49.792 03:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:49.792 03:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.792 03:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85242 00:15:49.792 killing process with pid 85242 00:15:49.792 03:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:49.792 03:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:49.792 03:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85242' 00:15:49.792 03:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 85242 00:15:49.792 03:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 85242 00:15:50.054 ************************************ 00:15:50.054 END TEST bdev_bounds 00:15:50.054 ************************************ 00:15:50.054 03:24:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:50.054 00:15:50.054 real 0m1.536s 00:15:50.054 user 0m3.713s 00:15:50.054 sys 0m0.344s 00:15:50.054 03:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.054 03:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:50.054 03:24:37 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:50.054 03:24:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:50.054 03:24:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.054 03:24:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.054 ************************************ 00:15:50.054 START TEST bdev_nbd 00:15:50.054 ************************************ 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' 
'/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:50.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=85289 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 85289 /var/tmp/spdk-nbd.sock 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 85289 ']' 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.054 03:24:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:50.054 [2024-11-21 03:24:37.497593] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:15:50.054 [2024-11-21 03:24:37.497929] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.316 [2024-11-21 03:24:37.634816] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
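Once the bdev_svc instance above is listening on /var/tmp/spdk-nbd.sock, the loop traced below attaches each bdev to a kernel NBD node and smoke-tests it with one direct 4 KiB read. That attach-and-verify step, boiled down for a single bdev (socket path and bdev name from the trace; waitfornbd's retries are collapsed to one check):

# Export one bdev over NBD and read back its first 4 KiB block.
rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
nbd=$($rpc_py nbd_start_disk nvme0n1)          # prints the node, e.g. /dev/nbd0
grep -q -w "${nbd##*/}" /proc/partitions       # kernel registered the device?
dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
[[ $(stat -c %s /tmp/nbdtest) -eq 4096 ]] && echo "NBD read back OK"
rm -f /tmp/nbdtest
$rpc_py nbd_stop_disk "$nbd"                   # detach when done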
00:15:50.316 [2024-11-21 03:24:37.664616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.316 [2024-11-21 03:24:37.695093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.889 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.889 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:50.889 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:50.889 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:50.889 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:50.889 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:50.889 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:50.889 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:50.889 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:50.889 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:50.889 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:50.889 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:50.889 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:50.889 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:50.889 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.151 1+0 records in 00:15:51.151 1+0 records out 00:15:51.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111588 s, 3.7 MB/s 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.151 03:24:38 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:51.151 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.412 1+0 records in 00:15:51.412 1+0 records out 00:15:51.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000832524 s, 4.9 MB/s 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:51.412 03:24:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:51.673 03:24:39 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.673 1+0 records in 00:15:51.673 1+0 records out 00:15:51.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119623 s, 3.4 MB/s 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:51.673 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.934 1+0 records in 00:15:51.934 1+0 records out 00:15:51.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115938 s, 3.5 MB/s 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
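Each nbd_start_disk call above is followed by the waitfornbd helper, which polls /proc/partitions until the kernel exposes the new device and then issues a single 4 KiB O_DIRECT read through dd to prove the device actually services I/O. A condensed sketch of that probe, keeping the 20-attempt budget from the (( i <= 20 )) loops; the /tmp scratch path and the 0.1 s sleep are assumptions.

#!/usr/bin/env bash
# Sketch of the waitfornbd readiness probe: the device must show up in
# /proc/partitions, and one direct 4 KiB read must return real data.
waitfornbd() {
    local nbd_name=$1 i tmp=/tmp/nbdtest   # scratch path is an assumption

    # Phase 1: wait until the kernel lists the nbd device.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1

    # Phase 2: an O_DIRECT read exercises the full nbd -> SPDK bdev path.
    for ((i = 1; i <= 20; i++)); do
        if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2> /dev/null; then
            local size
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [[ $size != 0 ]] && return 0   # the probe must yield real bytes
        fi
        sleep 0.1
    done
    rm -f "$tmp"
    return 1
}

waitfornbd nbd0   # example invocation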
00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:51.934 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:52.196 1+0 records in 00:15:52.196 1+0 records out 00:15:52.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113977 s, 3.6 MB/s 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:52.196 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:52.457 03:24:39 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:52.457 1+0 records in 00:15:52.457 1+0 records out 00:15:52.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00189567 s, 2.2 MB/s 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:52.457 03:24:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:52.718 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:52.718 { 00:15:52.718 "nbd_device": "/dev/nbd0", 00:15:52.718 "bdev_name": "nvme0n1" 00:15:52.718 }, 00:15:52.718 { 00:15:52.718 "nbd_device": "/dev/nbd1", 00:15:52.718 "bdev_name": "nvme0n2" 00:15:52.718 }, 00:15:52.718 { 00:15:52.718 "nbd_device": "/dev/nbd2", 00:15:52.719 "bdev_name": "nvme0n3" 00:15:52.719 }, 00:15:52.719 { 00:15:52.719 "nbd_device": "/dev/nbd3", 00:15:52.719 "bdev_name": "nvme1n1" 00:15:52.719 }, 00:15:52.719 { 00:15:52.719 "nbd_device": "/dev/nbd4", 00:15:52.719 "bdev_name": "nvme2n1" 00:15:52.719 }, 00:15:52.719 { 00:15:52.719 "nbd_device": "/dev/nbd5", 00:15:52.719 "bdev_name": "nvme3n1" 00:15:52.719 } 00:15:52.719 ]' 00:15:52.719 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:52.719 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:52.719 { 00:15:52.719 "nbd_device": "/dev/nbd0", 00:15:52.719 "bdev_name": "nvme0n1" 00:15:52.719 }, 00:15:52.719 { 00:15:52.719 "nbd_device": "/dev/nbd1", 00:15:52.719 "bdev_name": "nvme0n2" 00:15:52.719 }, 00:15:52.719 { 00:15:52.719 "nbd_device": "/dev/nbd2", 00:15:52.719 "bdev_name": "nvme0n3" 00:15:52.719 }, 00:15:52.719 { 00:15:52.719 "nbd_device": "/dev/nbd3", 00:15:52.719 "bdev_name": "nvme1n1" 00:15:52.719 }, 00:15:52.719 { 00:15:52.719 "nbd_device": "/dev/nbd4", 00:15:52.719 "bdev_name": "nvme2n1" 00:15:52.719 }, 00:15:52.719 { 00:15:52.719 "nbd_device": "/dev/nbd5", 00:15:52.719 "bdev_name": "nvme3n1" 00:15:52.719 } 00:15:52.719 ]' 00:15:52.719 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:52.719 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:52.719 03:24:40 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:52.719 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:52.719 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:52.719 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:52.719 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.719 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:52.979 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:52.979 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:52.979 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:52.979 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.979 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.979 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:52.979 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:52.979 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.979 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.979 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:53.240 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:53.240 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:53.240 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:53.240 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:53.240 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:53.240 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:53.240 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:53.240 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:53.240 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:53.240 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:53.502 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:53.502 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:53.502 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:53.502 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:53.502 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:53.502 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:15:53.502 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:53.502 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:53.502 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:15:53.502 03:24:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:53.763 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:54.025 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:54.025 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:54.025 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:54.025 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:54.025 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:54.025 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:54.025 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:54.025 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:54.025 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:54.025 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:54.025 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:54.302 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:54.302 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:54.302 03:24:41 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:54.302 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:54.302 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:54.302 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:54.302 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:54.302 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:54.302 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:54.302 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:54.302 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:54.302 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:54.303 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:54.303 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:54.303 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:54.303 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:54.303 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:54.303 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:54.303 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:54.303 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:54.303 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:54.303 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:54.303 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:54.303 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:54.303 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:54.303 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:54.303 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:54.303 03:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:54.563 /dev/nbd0 00:15:54.563 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:54.563 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:54.563 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:54.563 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:54.563 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:54.563 03:24:42 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:54.563 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:54.563 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:54.563 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:54.564 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:54.564 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.564 1+0 records in 00:15:54.564 1+0 records out 00:15:54.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107502 s, 3.8 MB/s 00:15:54.564 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.564 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:54.564 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.564 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:54.564 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:54.564 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:54.564 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:54.564 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:54.825 /dev/nbd1 00:15:54.825 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:54.825 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:54.826 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:54.826 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:54.826 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:54.826 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:54.826 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:54.826 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:54.826 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:54.826 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:54.826 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.826 1+0 records in 00:15:54.826 1+0 records out 00:15:54.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011691 s, 3.5 MB/s 00:15:54.826 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.826 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:54.826 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.826 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:54.826 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # 
return 0 00:15:54.826 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:54.826 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:54.826 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:55.087 /dev/nbd10 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.087 1+0 records in 00:15:55.087 1+0 records out 00:15:55.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012235 s, 3.3 MB/s 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:55.087 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:55.348 /dev/nbd11 00:15:55.348 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:55.348 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:55.348 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:55.348 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:55.348 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:55.348 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:55.348 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:55.348 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:55.348 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:55.348 03:24:42 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:55.348 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.348 1+0 records in 00:15:55.348 1+0 records out 00:15:55.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118469 s, 3.5 MB/s 00:15:55.348 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.348 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:55.348 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.348 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:55.348 03:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:55.348 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:55.348 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:55.348 03:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:55.610 /dev/nbd12 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.610 1+0 records in 00:15:55.610 1+0 records out 00:15:55.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000991288 s, 4.1 MB/s 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:55.610 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 
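This second pass starts each bdev at an explicitly chosen device node (nvme0n1 at /dev/nbd0 through nvme3n1 at /dev/nbd13) and, as the trace goes on to show, cross-checks the result by feeding nbd_get_disks output through jq and counting the /dev/nbd matches. A sketch of that start-and-count check; the failure message and exit handling are assumptions.

#!/usr/bin/env bash
# Sketch: attach each bdev to a chosen /dev/nbdX, then confirm the mapping
# by parsing nbd_get_disks JSON with jq, as the trace does.
set -euo pipefail

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_sock=/var/tmp/spdk-nbd.sock
bdev_list=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

for i in "${!bdev_list[@]}"; do
    "$rpc_py" -s "$rpc_sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
done

# nbd_get_disks returns [{"nbd_device": ..., "bdev_name": ...}, ...]; count
# the exported devices ("|| true" keeps grep -c's empty-input exit harmless).
count=$("$rpc_py" -s "$rpc_sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
if [[ $count -ne ${#nbd_list[@]} ]]; then
    echo "expected ${#nbd_list[@]} nbd devices, got $count" >&2
    exit 1
fi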
00:15:55.871 /dev/nbd13 00:15:55.871 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:55.871 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:55.871 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:55.871 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:55.871 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:55.871 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:55.871 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:55.871 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:55.871 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:55.871 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:55.871 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.871 1+0 records in 00:15:55.871 1+0 records out 00:15:55.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133979 s, 3.1 MB/s 00:15:55.871 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.871 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:55.872 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.872 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:55.872 03:24:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:55.872 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:55.872 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:55.872 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:55.872 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:55.872 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:56.133 { 00:15:56.133 "nbd_device": "/dev/nbd0", 00:15:56.133 "bdev_name": "nvme0n1" 00:15:56.133 }, 00:15:56.133 { 00:15:56.133 "nbd_device": "/dev/nbd1", 00:15:56.133 "bdev_name": "nvme0n2" 00:15:56.133 }, 00:15:56.133 { 00:15:56.133 "nbd_device": "/dev/nbd10", 00:15:56.133 "bdev_name": "nvme0n3" 00:15:56.133 }, 00:15:56.133 { 00:15:56.133 "nbd_device": "/dev/nbd11", 00:15:56.133 "bdev_name": "nvme1n1" 00:15:56.133 }, 00:15:56.133 { 00:15:56.133 "nbd_device": "/dev/nbd12", 00:15:56.133 "bdev_name": "nvme2n1" 00:15:56.133 }, 00:15:56.133 { 00:15:56.133 "nbd_device": "/dev/nbd13", 00:15:56.133 "bdev_name": "nvme3n1" 00:15:56.133 } 00:15:56.133 ]' 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:56.133 { 00:15:56.133 "nbd_device": "/dev/nbd0", 00:15:56.133 "bdev_name": "nvme0n1" 00:15:56.133 }, 00:15:56.133 { 00:15:56.133 "nbd_device": "/dev/nbd1", 00:15:56.133 "bdev_name": "nvme0n2" 00:15:56.133 }, 00:15:56.133 { 00:15:56.133 "nbd_device": 
"/dev/nbd10", 00:15:56.133 "bdev_name": "nvme0n3" 00:15:56.133 }, 00:15:56.133 { 00:15:56.133 "nbd_device": "/dev/nbd11", 00:15:56.133 "bdev_name": "nvme1n1" 00:15:56.133 }, 00:15:56.133 { 00:15:56.133 "nbd_device": "/dev/nbd12", 00:15:56.133 "bdev_name": "nvme2n1" 00:15:56.133 }, 00:15:56.133 { 00:15:56.133 "nbd_device": "/dev/nbd13", 00:15:56.133 "bdev_name": "nvme3n1" 00:15:56.133 } 00:15:56.133 ]' 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:56.133 /dev/nbd1 00:15:56.133 /dev/nbd10 00:15:56.133 /dev/nbd11 00:15:56.133 /dev/nbd12 00:15:56.133 /dev/nbd13' 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:56.133 /dev/nbd1 00:15:56.133 /dev/nbd10 00:15:56.133 /dev/nbd11 00:15:56.133 /dev/nbd12 00:15:56.133 /dev/nbd13' 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:56.133 256+0 records in 00:15:56.133 256+0 records out 00:15:56.133 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102553 s, 102 MB/s 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:56.133 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:56.394 256+0 records in 00:15:56.394 256+0 records out 00:15:56.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240403 s, 4.4 MB/s 00:15:56.394 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:56.394 03:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:56.655 256+0 records in 00:15:56.655 256+0 records out 00:15:56.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.203 s, 5.2 MB/s 00:15:56.655 03:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:56.655 03:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 
00:15:56.916 256+0 records in 00:15:56.916 256+0 records out 00:15:56.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.243945 s, 4.3 MB/s 00:15:56.916 03:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:56.917 03:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:57.178 256+0 records in 00:15:57.178 256+0 records out 00:15:57.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.204682 s, 5.1 MB/s 00:15:57.178 03:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:57.178 03:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:57.178 256+0 records in 00:15:57.178 256+0 records out 00:15:57.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239489 s, 4.4 MB/s 00:15:57.178 03:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:57.178 03:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:57.751 256+0 records in 00:15:57.751 256+0 records out 00:15:57.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.277089 s, 3.8 MB/s 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.751 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:58.012 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:58.012 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:58.012 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:58.012 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.012 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.012 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:58.012 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:58.012 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.012 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.012 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:58.305 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:58.305 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:58.305 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd10 00:15:58.305 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.305 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.305 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:58.305 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:58.305 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.305 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.305 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:58.599 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:58.599 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:58.599 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:58.599 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.599 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.599 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:58.599 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:58.599 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.599 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.600 03:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.862 03:24:46 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:58.862 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:59.123 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:59.123 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:59.123 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:59.123 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:59.123 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:59.123 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:59.123 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:59.123 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:59.123 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:59.123 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:59.123 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:59.123 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:59.123 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:59.124 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:59.124 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:59.124 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:59.384 malloc_lvol_verify 00:15:59.384 03:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:59.645 d4a3f4ea-3814-4912-bf64-4a4dd3871dea 00:15:59.645 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:59.906 1e064f00-3ca9-462e-bcd6-fda30de1a57d 00:15:59.906 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:00.167 /dev/nbd0 00:16:00.167 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:00.167 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:00.167 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:00.167 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:00.167 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:16:00.167 mke2fs 1.47.0 (5-Feb-2023) 00:16:00.167 Discarding device blocks: 0/4096 done 00:16:00.167 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:00.167 00:16:00.167 Allocating group tables: 0/1 done 00:16:00.167 Writing inode tables: 0/1 done 00:16:00.167 Creating journal (1024 blocks): done 00:16:00.167 Writing superblocks and filesystem accounting 
information: 0/1 done 00:16:00.167 00:16:00.167 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:00.167 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:00.167 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:00.167 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:00.167 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:00.167 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.167 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 85289 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 85289 ']' 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 85289 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85289 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:00.428 killing process with pid 85289 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85289' 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 85289 00:16:00.428 03:24:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 85289 00:16:00.689 03:24:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:00.689 00:16:00.689 real 0m10.611s 00:16:00.689 user 0m14.282s 00:16:00.689 sys 0m3.893s 00:16:00.689 03:24:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.689 ************************************ 00:16:00.689 END TEST bdev_nbd 00:16:00.689 ************************************ 00:16:00.689 03:24:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:00.689 03:24:48 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:16:00.689 03:24:48 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:16:00.689 03:24:48 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:16:00.689 03:24:48 blockdev_xnvme -- 
bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:16:00.689 03:24:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:00.689 03:24:48 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.689 03:24:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:00.689 ************************************ 00:16:00.689 START TEST bdev_fio 00:16:00.689 ************************************ 00:16:00.689 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:00.689 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:00.689 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:00.689 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:00.689 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:00.689 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:00.689 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:00.689 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:00.689 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:00.689 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:00.689 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:00.689 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:00.689 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:00.689 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:00.689 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:00.689 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:00.689 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:00.690 ************************************ 00:16:00.690 START TEST bdev_fio_rw_verify 00:16:00.690 ************************************ 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:00.690 03:24:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:00.951 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:00.951 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:00.951 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:00.951 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:00.951 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:00.951 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:00.951 fio-3.35 00:16:00.951 Starting 6 threads 00:16:13.192 00:16:13.192 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=85697: Thu Nov 21 03:24:58 2024 00:16:13.192 read: IOPS=13.3k, BW=51.8MiB/s (54.3MB/s)(518MiB/10004msec) 00:16:13.192 slat (usec): min=2, max=2073, avg= 6.88, stdev=17.01 00:16:13.192 clat (usec): min=91, max=16547, avg=1472.32, stdev=787.92 00:16:13.192 lat (usec): min=95, max=16563, avg=1479.21, stdev=788.44 00:16:13.192 clat percentiles (usec): 00:16:13.192 | 50.000th=[ 1369], 99.000th=[ 3851], 99.900th=[ 5342], 99.990th=[ 9241], 00:16:13.192 | 99.999th=[16581] 00:16:13.192 write: IOPS=13.6k, BW=53.0MiB/s (55.5MB/s)(530MiB/10004msec); 0 zone resets 00:16:13.192 slat (usec): min=12, max=4416, avg=43.47, 
stdev=150.81 00:16:13.192 clat (usec): min=89, max=8017, avg=1753.48, stdev=851.83 00:16:13.192 lat (usec): min=104, max=8067, avg=1796.95, stdev=865.29 00:16:13.192 clat percentiles (usec): 00:16:13.192 | 50.000th=[ 1614], 99.000th=[ 4359], 99.900th=[ 5932], 99.990th=[ 7111], 00:16:13.192 | 99.999th=[ 7963] 00:16:13.192 bw ( KiB/s): min=48555, max=64595, per=99.93%, avg=54207.37, stdev=812.99, samples=114 00:16:13.192 iops : min=12135, max=16148, avg=13550.89, stdev=203.28, samples=114 00:16:13.192 lat (usec) : 100=0.01%, 250=0.95%, 500=4.66%, 750=7.12%, 1000=10.22% 00:16:13.192 lat (msec) : 2=50.36%, 4=25.41%, 10=1.28%, 20=0.01% 00:16:13.192 cpu : usr=44.10%, sys=31.54%, ctx=5327, majf=0, minf=13928 00:16:13.192 IO depths : 1=11.5%, 2=23.9%, 4=51.1%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.192 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.192 issued rwts: total=132728,135666,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.192 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:13.192 00:16:13.192 Run status group 0 (all jobs): 00:16:13.192 READ: bw=51.8MiB/s (54.3MB/s), 51.8MiB/s-51.8MiB/s (54.3MB/s-54.3MB/s), io=518MiB (544MB), run=10004-10004msec 00:16:13.192 WRITE: bw=53.0MiB/s (55.5MB/s), 53.0MiB/s-53.0MiB/s (55.5MB/s-55.5MB/s), io=530MiB (556MB), run=10004-10004msec 00:16:13.192 ----------------------------------------------------- 00:16:13.192 Suppressions used: 00:16:13.192 count bytes template 00:16:13.192 6 48 /usr/src/fio/parse.c 00:16:13.192 2843 272928 /usr/src/fio/iolog.c 00:16:13.192 1 8 libtcmalloc_minimal.so 00:16:13.192 1 904 libcrypto.so 00:16:13.192 ----------------------------------------------------- 00:16:13.192 00:16:13.192 00:16:13.192 real 0m11.238s 00:16:13.192 user 0m27.228s 00:16:13.192 sys 0m19.265s 00:16:13.192 03:24:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.192 03:24:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:13.192 ************************************ 00:16:13.192 END TEST bdev_fio_rw_verify 00:16:13.192 ************************************ 00:16:13.192 03:24:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:13.192 03:24:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:13.192 03:24:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:13.193 
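Editor's note: fio_config_gen is invoked here a second time, now with workload "trim"; the first invocation, whose run just completed above, produced the verify-mode bdev.fio. Its [global] section is never echoed into the log, so the reconstruction below is an assumption pieced together from the reported job lines (rw=randwrite under a verify workload) and the serialize_overlap=1 echo; the [job_*] sections and the fio command line are verbatim from the trace. A minimal sketch:

    # hypothetical reconstruction of the generated verify-mode bdev.fio;
    # only serialize_overlap=1 and the [job_*] sections appear verbatim in
    # the trace -- thread=1 and verify=crc32c are assumptions
    cat > /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio <<'EOF'
    [global]
    thread=1
    rw=randwrite
    verify=crc32c
    serialize_overlap=1

    [job_nvme0n1]
    filename=nvme0n1
    EOF

    # run fio through the SPDK external ioengine, preloaded next to libasan
    # exactly as in the LD_PRELOAD line of the trace above
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
      /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
      --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

Block size, queue depth, and runtime live on the command line rather than in the job file, which is why the job banners above report bs=4096B, iodepth=8 even though only the job names and filenames are echoed into bdev.fio.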
03:24:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "259d7aac-dc51-4485-b3a1-b9d822704ed9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "259d7aac-dc51-4485-b3a1-b9d822704ed9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "c5b93bdf-87e6-430c-9fb1-8881ca16986d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c5b93bdf-87e6-430c-9fb1-8881ca16986d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "a5182356-ee53-4bc1-b1d1-1e1f7b2566cc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a5182356-ee53-4bc1-b1d1-1e1f7b2566cc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "727e3318-d855-4893-8849-519b60dfcfb2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": 
"727e3318-d855-4893-8849-519b60dfcfb2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "84d7e84a-d408-4118-aab1-37fdc5816cab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "84d7e84a-d408-4118-aab1-37fdc5816cab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a94f48b4-329c-4538-a4f7-4a6907e0b325"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a94f48b4-329c-4538-a4f7-4a6907e0b325",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:13.193 /home/vagrant/spdk_repo/spdk 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:16:13.193 00:16:13.193 real 0m11.423s 00:16:13.193 user 0m27.308s 00:16:13.193 sys 0m19.342s 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.193 03:24:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:13.193 ************************************ 00:16:13.193 END TEST 
bdev_fio 00:16:13.193 ************************************ 00:16:13.193 03:24:59 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:13.193 03:24:59 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:13.193 03:24:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:13.193 03:24:59 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.193 03:24:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:13.193 ************************************ 00:16:13.193 START TEST bdev_verify 00:16:13.193 ************************************ 00:16:13.193 03:24:59 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:13.193 [2024-11-21 03:24:59.670508] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:16:13.193 [2024-11-21 03:24:59.670642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85861 ] 00:16:13.193 [2024-11-21 03:24:59.807663] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:13.193 [2024-11-21 03:24:59.836993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:13.193 [2024-11-21 03:24:59.870418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.193 [2024-11-21 03:24:59.870501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.193 Running I/O for 5 seconds... 
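Editor's note: before the results land, the bdevperf invocation just traced is worth unpacking. The flags below are copied verbatim from the run_test line above (including the trailing empty '' argument); the comments are editorial glosses, and the reading of -C is inferred from the paired per-core job lines in the table that follows:

    # verify workload against every bdev described in bdev.json
    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # -q 128     queue depth per job
    # -o 4096    I/O size in bytes (4 KiB)
    # -w verify  write-and-read-back verification workload
    # -t 5       run time in seconds
    # -m 0x3     core mask: reactors on cores 0 and 1
    # -C         each bdev is driven from every core -- hence the duplicate
    #            "Core Mask 0x1" / "Core Mask 0x2" rows per device below
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''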
00:16:15.154 23840.00 IOPS, 93.12 MiB/s [2024-11-21T03:25:03.666Z] 23424.00 IOPS, 91.50 MiB/s [2024-11-21T03:25:04.609Z] 23584.00 IOPS, 92.13 MiB/s [2024-11-21T03:25:05.552Z] 23639.75 IOPS, 92.34 MiB/s [2024-11-21T03:25:05.552Z] 23027.20 IOPS, 89.95 MiB/s 00:16:17.987 Latency(us) 00:16:17.987 [2024-11-21T03:25:05.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.987 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:17.987 Verification LBA range: start 0x0 length 0x80000 00:16:17.987 nvme0n1 : 5.03 1805.76 7.05 0.00 0.00 70742.99 4814.38 75820.11 00:16:17.987 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:17.987 Verification LBA range: start 0x80000 length 0x80000 00:16:17.987 nvme0n1 : 5.04 1853.79 7.24 0.00 0.00 68909.39 6956.90 87515.77 00:16:17.987 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:17.987 Verification LBA range: start 0x0 length 0x80000 00:16:17.987 nvme0n2 : 5.03 1779.79 6.95 0.00 0.00 71630.23 5595.77 68964.04 00:16:17.987 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:17.987 Verification LBA range: start 0x80000 length 0x80000 00:16:17.987 nvme0n2 : 5.06 1822.34 7.12 0.00 0.00 69962.29 7662.67 76223.41 00:16:17.987 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:17.987 Verification LBA range: start 0x0 length 0x80000 00:16:17.987 nvme0n3 : 5.05 1773.36 6.93 0.00 0.00 71740.91 9074.22 66544.25 00:16:17.987 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:17.987 Verification LBA range: start 0x80000 length 0x80000 00:16:17.987 nvme0n3 : 5.04 1802.37 7.04 0.00 0.00 70589.34 12149.37 76626.71 00:16:17.987 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:17.987 Verification LBA range: start 0x0 length 0xa0000 00:16:17.987 nvme1n1 : 5.08 1789.31 6.99 0.00 0.00 70955.01 7561.85 66544.25 00:16:17.987 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:17.987 Verification LBA range: start 0xa0000 length 0xa0000 00:16:17.987 nvme1n1 : 5.08 1814.83 7.09 0.00 0.00 69964.21 7864.32 75820.11 00:16:17.987 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:17.987 Verification LBA range: start 0x0 length 0x20000 00:16:17.987 nvme2n1 : 5.08 1788.40 6.99 0.00 0.00 70847.05 9074.22 64527.75 00:16:17.987 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:17.987 Verification LBA range: start 0x20000 length 0x20000 00:16:17.987 nvme2n1 : 5.09 1812.34 7.08 0.00 0.00 69927.89 7914.73 86709.17 00:16:17.987 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:17.987 Verification LBA range: start 0x0 length 0xbd0bd 00:16:17.987 nvme3n1 : 5.08 2297.36 8.97 0.00 0.00 54925.74 4713.55 64527.75 00:16:17.987 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:17.987 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:17.987 nvme3n1 : 5.08 2400.21 9.38 0.00 0.00 52656.48 5116.85 69367.34 00:16:17.987 [2024-11-21T03:25:05.552Z] =================================================================================================================== 00:16:17.987 [2024-11-21T03:25:05.552Z] Total : 22739.86 88.83 0.00 0.00 67043.83 4713.55 87515.77 00:16:17.987 00:16:17.987 real 0m5.875s 00:16:17.987 user 0m9.372s 00:16:17.987 sys 0m1.466s 00:16:17.987 ************************************ 00:16:17.987 END TEST 
bdev_verify 00:16:17.987 ************************************ 00:16:17.987 03:25:05 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.987 03:25:05 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:17.987 03:25:05 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:17.987 03:25:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:17.987 03:25:05 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.987 03:25:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:17.987 ************************************ 00:16:17.987 START TEST bdev_verify_big_io 00:16:17.987 ************************************ 00:16:17.987 03:25:05 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:18.249 [2024-11-21 03:25:05.619732] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:16:18.249 [2024-11-21 03:25:05.619874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85954 ] 00:16:18.249 [2024-11-21 03:25:05.757629] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:18.249 [2024-11-21 03:25:05.788237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:18.510 [2024-11-21 03:25:05.819120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.510 [2024-11-21 03:25:05.819178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.771 Running I/O for 5 seconds... 
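Editor's note: the MiB/s column in these tables is nothing more than IOPS times the I/O size. A quick arithmetic check against the first sample of the 4 KiB verify run above (23840.00 IOPS) and the 64 KiB big-IO samples that follow:

    # MiB/s = IOPS * io_size_bytes / 2^20
    awk 'BEGIN { printf "%.4f\n", 23840 * 4096  / 1048576 }'  # 93.1250  -> reported 93.12 MiB/s
    awk 'BEGIN { printf "%.4f\n", 2673  * 65536 / 1048576 }'  # 167.0625 -> reported 167.06 MiB/s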
00:16:24.614 672.00 IOPS, 42.00 MiB/s [2024-11-21T03:25:12.441Z] 2673.00 IOPS, 167.06 MiB/s [2024-11-21T03:25:12.441Z] 3002.00 IOPS, 187.62 MiB/s 00:16:24.876 Latency(us) 00:16:24.876 [2024-11-21T03:25:12.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.876 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:24.876 Verification LBA range: start 0x0 length 0x8000 00:16:24.876 nvme0n1 : 6.05 123.02 7.69 0.00 0.00 1003066.46 11645.24 1109877.37 00:16:24.876 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:24.876 Verification LBA range: start 0x8000 length 0x8000 00:16:24.876 nvme0n1 : 5.97 105.83 6.61 0.00 0.00 1158634.24 9074.22 1400252.26 00:16:24.876 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:24.876 Verification LBA range: start 0x0 length 0x8000 00:16:24.876 nvme0n2 : 5.90 119.27 7.45 0.00 0.00 992412.68 146800.64 929199.66 00:16:24.876 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:24.876 Verification LBA range: start 0x8000 length 0x8000 00:16:24.876 nvme0n2 : 6.05 118.96 7.43 0.00 0.00 1006098.45 30650.68 1000180.18 00:16:24.876 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:24.876 Verification LBA range: start 0x0 length 0x8000 00:16:24.876 nvme0n3 : 5.91 127.35 7.96 0.00 0.00 894564.25 77030.01 1290555.08 00:16:24.876 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:24.876 Verification LBA range: start 0x8000 length 0x8000 00:16:24.876 nvme0n3 : 5.97 125.86 7.87 0.00 0.00 913452.24 52025.50 1116330.14 00:16:24.876 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:24.876 Verification LBA range: start 0x0 length 0xa000 00:16:24.876 nvme1n1 : 5.96 102.67 6.42 0.00 0.00 1090807.02 53638.70 2865032.27 00:16:24.876 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:24.876 Verification LBA range: start 0xa000 length 0xa000 00:16:24.876 nvme1n1 : 6.06 124.15 7.76 0.00 0.00 901338.01 63721.16 1219574.55 00:16:24.876 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:24.876 Verification LBA range: start 0x0 length 0x2000 00:16:24.876 nvme2n1 : 6.05 121.73 7.61 0.00 0.00 897204.00 54445.29 1619646.62 00:16:24.876 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:24.876 Verification LBA range: start 0x2000 length 0x2000 00:16:24.876 nvme2n1 : 6.07 123.84 7.74 0.00 0.00 876861.49 81869.59 1600288.30 00:16:24.876 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:24.876 Verification LBA range: start 0x0 length 0xbd0b 00:16:24.876 nvme3n1 : 6.07 184.16 11.51 0.00 0.00 577484.30 6074.68 1025991.29 00:16:24.876 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:24.876 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:24.876 nvme3n1 : 6.08 155.17 9.70 0.00 0.00 678366.95 2797.88 1871304.86 00:16:24.876 [2024-11-21T03:25:12.441Z] =================================================================================================================== 00:16:24.876 [2024-11-21T03:25:12.441Z] Total : 1532.03 95.75 0.00 0.00 890937.02 2797.88 2865032.27 00:16:25.139 00:16:25.139 real 0m6.896s 00:16:25.139 user 0m12.594s 00:16:25.139 sys 0m0.477s 00:16:25.139 03:25:12 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:25.139 
************************************ 00:16:25.139 END TEST bdev_verify_big_io 00:16:25.139 ************************************ 00:16:25.139 03:25:12 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.139 03:25:12 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:25.139 03:25:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:25.139 03:25:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.139 03:25:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:25.139 ************************************ 00:16:25.139 START TEST bdev_write_zeroes 00:16:25.139 ************************************ 00:16:25.139 03:25:12 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:25.139 [2024-11-21 03:25:12.578166] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:16:25.139 [2024-11-21 03:25:12.578306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86054 ] 00:16:25.401 [2024-11-21 03:25:12.714145] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:25.401 [2024-11-21 03:25:12.744793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.401 [2024-11-21 03:25:12.774342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.663 Running I/O for 1 seconds... 
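Editor's note on the workload choice: the bdev JSON dumped during the fio trim step reports "unmap": false but "write_zeroes": true for every xNVMe bdev, so the jq capability filter there came back empty, the trim pass had nothing to run against, and write_zeroes is exercised here instead. The same check against a live target might look like this (the jq expression is verbatim from the trace; bdev_get_bdevs returns an array, hence the added .[]):

    # list bdevs able to service unmap/trim; empty output means the
    # trim workload is skipped, exactly as it was above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'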
00:16:26.608 72096.00 IOPS, 281.62 MiB/s 00:16:26.608 Latency(us) 00:16:26.608 [2024-11-21T03:25:14.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.608 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:26.608 nvme0n1 : 1.02 11678.37 45.62 0.00 0.00 10950.28 7208.96 20568.22 00:16:26.608 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:26.608 nvme0n2 : 1.02 11665.26 45.57 0.00 0.00 10953.23 7208.96 20669.05 00:16:26.608 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:26.608 nvme0n3 : 1.02 11720.99 45.79 0.00 0.00 10891.29 7108.14 19156.68 00:16:26.608 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:26.608 nvme1n1 : 1.02 11706.61 45.73 0.00 0.00 10895.71 7208.96 19358.33 00:16:26.608 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:26.608 nvme2n1 : 1.02 11693.39 45.68 0.00 0.00 10895.94 7208.96 19660.80 00:16:26.608 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:26.608 nvme3n1 : 1.02 13396.23 52.33 0.00 0.00 9501.31 4713.55 17745.13 00:16:26.608 [2024-11-21T03:25:14.173Z] =================================================================================================================== 00:16:26.608 [2024-11-21T03:25:14.173Z] Total : 71860.86 280.71 0.00 0.00 10651.91 4713.55 20669.05 00:16:26.869 00:16:26.869 real 0m1.764s 00:16:26.869 user 0m1.047s 00:16:26.869 sys 0m0.515s 00:16:26.869 03:25:14 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.869 ************************************ 00:16:26.869 END TEST bdev_write_zeroes 00:16:26.869 ************************************ 00:16:26.869 03:25:14 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:26.869 03:25:14 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:26.869 03:25:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:26.869 03:25:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.869 03:25:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:26.869 ************************************ 00:16:26.869 START TEST bdev_json_nonenclosed 00:16:26.869 ************************************ 00:16:26.869 03:25:14 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:26.869 [2024-11-21 03:25:14.417249] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:16:26.869 [2024-11-21 03:25:14.417395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86089 ] 00:16:27.130 [2024-11-21 03:25:14.553056] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:27.130 [2024-11-21 03:25:14.579925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.130 [2024-11-21 03:25:14.609795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.130 [2024-11-21 03:25:14.609919] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:27.130 [2024-11-21 03:25:14.609942] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:27.130 [2024-11-21 03:25:14.609955] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:27.391 00:16:27.391 real 0m0.350s 00:16:27.391 user 0m0.131s 00:16:27.391 sys 0m0.115s 00:16:27.391 03:25:14 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.391 03:25:14 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:27.391 ************************************ 00:16:27.391 END TEST bdev_json_nonenclosed 00:16:27.391 ************************************ 00:16:27.391 03:25:14 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:27.391 03:25:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:27.391 03:25:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.391 03:25:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.391 ************************************ 00:16:27.391 START TEST bdev_json_nonarray 00:16:27.391 ************************************ 00:16:27.391 03:25:14 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:27.391 [2024-11-21 03:25:14.837104] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:16:27.391 [2024-11-21 03:25:14.837251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86116 ] 00:16:27.652 [2024-11-21 03:25:14.973134] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:27.652 [2024-11-21 03:25:15.004085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.652 [2024-11-21 03:25:15.033758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.652 [2024-11-21 03:25:15.033880] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:16:27.652 [2024-11-21 03:25:15.033915] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:27.652 [2024-11-21 03:25:15.033931] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:27.652 00:16:27.652 real 0m0.355s 00:16:27.652 user 0m0.141s 00:16:27.652 sys 0m0.110s 00:16:27.652 03:25:15 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.652 ************************************ 00:16:27.652 END TEST bdev_json_nonarray 00:16:27.652 ************************************ 00:16:27.652 03:25:15 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:27.652 03:25:15 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:16:27.652 03:25:15 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:16:27.652 03:25:15 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:16:27.652 03:25:15 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:27.652 03:25:15 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:16:27.652 03:25:15 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:27.652 03:25:15 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:27.652 03:25:15 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:27.652 03:25:15 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:27.652 03:25:15 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:27.652 03:25:15 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:27.652 03:25:15 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:28.224 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:40.455 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:40.455 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:40.455 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:40.455 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:40.455 00:16:40.455 real 0m54.739s 00:16:40.455 user 1m12.527s 00:16:40.455 sys 0m48.594s 00:16:40.455 03:25:27 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.455 ************************************ 00:16:40.455 END TEST blockdev_xnvme 00:16:40.455 ************************************ 00:16:40.455 03:25:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:40.455 03:25:27 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:40.455 03:25:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:40.455 03:25:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.455 03:25:27 -- common/autotest_common.sh@10 -- # set +x 00:16:40.455 ************************************ 00:16:40.455 START TEST ublk 00:16:40.455 ************************************ 00:16:40.455 03:25:27 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:40.455 * Looking for test storage... 
00:16:40.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:40.455 03:25:27 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:40.455 03:25:27 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:40.455 03:25:27 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:16:40.455 03:25:27 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:40.455 03:25:27 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:40.455 03:25:27 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:40.455 03:25:27 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:40.455 03:25:27 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:40.455 03:25:27 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:40.455 03:25:27 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:40.455 03:25:27 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:40.455 03:25:27 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:40.455 03:25:27 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:40.455 03:25:27 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:40.455 03:25:27 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:40.455 03:25:27 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:40.455 03:25:27 ublk -- scripts/common.sh@345 -- # : 1 00:16:40.455 03:25:27 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:40.455 03:25:27 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:40.455 03:25:27 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:40.455 03:25:27 ublk -- scripts/common.sh@353 -- # local d=1 00:16:40.455 03:25:27 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:40.455 03:25:27 ublk -- scripts/common.sh@355 -- # echo 1 00:16:40.455 03:25:27 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:40.455 03:25:27 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:40.455 03:25:27 ublk -- scripts/common.sh@353 -- # local d=2 00:16:40.455 03:25:27 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:40.455 03:25:27 ublk -- scripts/common.sh@355 -- # echo 2 00:16:40.455 03:25:27 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:40.455 03:25:27 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:40.455 03:25:27 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:40.455 03:25:27 ublk -- scripts/common.sh@368 -- # return 0 00:16:40.455 03:25:27 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:40.455 03:25:27 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:40.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.455 --rc genhtml_branch_coverage=1 00:16:40.455 --rc genhtml_function_coverage=1 00:16:40.455 --rc genhtml_legend=1 00:16:40.455 --rc geninfo_all_blocks=1 00:16:40.455 --rc geninfo_unexecuted_blocks=1 00:16:40.455 00:16:40.455 ' 00:16:40.455 03:25:27 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:40.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.455 --rc genhtml_branch_coverage=1 00:16:40.455 --rc genhtml_function_coverage=1 00:16:40.455 --rc genhtml_legend=1 00:16:40.455 --rc geninfo_all_blocks=1 00:16:40.455 --rc geninfo_unexecuted_blocks=1 00:16:40.455 00:16:40.455 ' 00:16:40.455 03:25:27 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:40.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.455 --rc genhtml_branch_coverage=1 00:16:40.455 --rc 
genhtml_function_coverage=1 00:16:40.455 --rc genhtml_legend=1 00:16:40.455 --rc geninfo_all_blocks=1 00:16:40.455 --rc geninfo_unexecuted_blocks=1 00:16:40.455 00:16:40.455 ' 00:16:40.455 03:25:27 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:40.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.455 --rc genhtml_branch_coverage=1 00:16:40.455 --rc genhtml_function_coverage=1 00:16:40.455 --rc genhtml_legend=1 00:16:40.455 --rc geninfo_all_blocks=1 00:16:40.455 --rc geninfo_unexecuted_blocks=1 00:16:40.455 00:16:40.455 ' 00:16:40.455 03:25:27 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:40.455 03:25:27 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:40.455 03:25:27 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:40.455 03:25:27 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:40.455 03:25:27 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:40.455 03:25:27 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:40.455 03:25:27 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:40.455 03:25:27 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:40.455 03:25:27 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:40.455 03:25:27 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:40.455 03:25:27 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:40.455 03:25:27 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:40.455 03:25:27 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:40.455 03:25:27 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:40.455 03:25:27 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:40.455 03:25:27 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:40.455 03:25:27 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:40.456 03:25:27 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:40.456 03:25:27 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:40.456 03:25:27 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:40.456 03:25:27 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:40.456 03:25:27 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.456 03:25:27 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.456 ************************************ 00:16:40.456 START TEST test_save_ublk_config 00:16:40.456 ************************************ 00:16:40.456 03:25:27 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:40.456 03:25:27 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:40.456 03:25:27 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=86410 00:16:40.456 03:25:27 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:40.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.456 03:25:27 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 86410 00:16:40.456 03:25:27 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 86410 ']' 00:16:40.456 03:25:27 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.456 03:25:27 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.456 03:25:27 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
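Editor's note: test_save_ublk_config is about to bring up a ublk device over the freshly started target and snapshot its state over RPC. A minimal sketch of the sequence the trace below drives; the rpc.py flag spellings are assumptions, while the method names and parameter values (cpumask "1", malloc0, ublk_id 0, 1 queue, depth 128) match the save_config dump that follows:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    modprobe ublk_drv                                          # kernel module, as in the trace
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk &  # target with ublk debug logging
    # (the suite waits for /var/tmp/spdk.sock to appear before issuing RPCs)
    "$RPC" ublk_create_target -m 1                 # cpumask "1"; flag spelling is an assumption
    "$RPC" bdev_malloc_create -b malloc0 32 4096   # 32 MiB backing bdev (8192 x 4 KiB blocks)
    "$RPC" ublk_start_disk malloc0 0 -q 1 -d 128   # exposes /dev/ublkb0
    "$RPC" save_config > ublk_config.json          # snapshot of the running configuration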
00:16:40.456 03:25:27 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.456 03:25:27 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:40.456 03:25:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:40.456 [2024-11-21 03:25:27.470579] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:16:40.456 [2024-11-21 03:25:27.470734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86410 ] 00:16:40.456 [2024-11-21 03:25:27.609107] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:40.456 [2024-11-21 03:25:27.638067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.456 [2024-11-21 03:25:27.679667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.716 03:25:28 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.716 03:25:28 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:40.716 03:25:28 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:40.716 03:25:28 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:40.716 03:25:28 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.716 03:25:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:40.716 [2024-11-21 03:25:28.268919] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:40.716 [2024-11-21 03:25:28.269550] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:40.977 malloc0 00:16:40.977 [2024-11-21 03:25:28.293019] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:40.977 [2024-11-21 03:25:28.293087] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:40.977 [2024-11-21 03:25:28.293103] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:40.977 [2024-11-21 03:25:28.293110] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:40.977 [2024-11-21 03:25:28.301996] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:40.977 [2024-11-21 03:25:28.302018] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:40.977 [2024-11-21 03:25:28.308924] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:40.977 [2024-11-21 03:25:28.309016] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:40.977 [2024-11-21 03:25:28.324924] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:40.977 0 00:16:40.977 03:25:28 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.977 03:25:28 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:40.977 03:25:28 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.977 03:25:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:41.237 03:25:28 ublk.test_save_ublk_config -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.237 03:25:28 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:41.237 "subsystems": [ 00:16:41.237 { 00:16:41.237 "subsystem": "fsdev", 00:16:41.237 "config": [ 00:16:41.237 { 00:16:41.237 "method": "fsdev_set_opts", 00:16:41.237 "params": { 00:16:41.237 "fsdev_io_pool_size": 65535, 00:16:41.237 "fsdev_io_cache_size": 256 00:16:41.237 } 00:16:41.237 } 00:16:41.237 ] 00:16:41.237 }, 00:16:41.237 { 00:16:41.237 "subsystem": "keyring", 00:16:41.237 "config": [] 00:16:41.237 }, 00:16:41.237 { 00:16:41.237 "subsystem": "iobuf", 00:16:41.237 "config": [ 00:16:41.237 { 00:16:41.237 "method": "iobuf_set_options", 00:16:41.237 "params": { 00:16:41.237 "small_pool_count": 8192, 00:16:41.237 "large_pool_count": 1024, 00:16:41.237 "small_bufsize": 8192, 00:16:41.237 "large_bufsize": 135168, 00:16:41.237 "enable_numa": false 00:16:41.237 } 00:16:41.237 } 00:16:41.237 ] 00:16:41.237 }, 00:16:41.237 { 00:16:41.237 "subsystem": "sock", 00:16:41.237 "config": [ 00:16:41.237 { 00:16:41.237 "method": "sock_set_default_impl", 00:16:41.237 "params": { 00:16:41.237 "impl_name": "posix" 00:16:41.237 } 00:16:41.237 }, 00:16:41.237 { 00:16:41.237 "method": "sock_impl_set_options", 00:16:41.237 "params": { 00:16:41.237 "impl_name": "ssl", 00:16:41.237 "recv_buf_size": 4096, 00:16:41.237 "send_buf_size": 4096, 00:16:41.237 "enable_recv_pipe": true, 00:16:41.238 "enable_quickack": false, 00:16:41.238 "enable_placement_id": 0, 00:16:41.238 "enable_zerocopy_send_server": true, 00:16:41.238 "enable_zerocopy_send_client": false, 00:16:41.238 "zerocopy_threshold": 0, 00:16:41.238 "tls_version": 0, 00:16:41.238 "enable_ktls": false 00:16:41.238 } 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "method": "sock_impl_set_options", 00:16:41.238 "params": { 00:16:41.238 "impl_name": "posix", 00:16:41.238 "recv_buf_size": 2097152, 00:16:41.238 "send_buf_size": 2097152, 00:16:41.238 "enable_recv_pipe": true, 00:16:41.238 "enable_quickack": false, 00:16:41.238 "enable_placement_id": 0, 00:16:41.238 "enable_zerocopy_send_server": true, 00:16:41.238 "enable_zerocopy_send_client": false, 00:16:41.238 "zerocopy_threshold": 0, 00:16:41.238 "tls_version": 0, 00:16:41.238 "enable_ktls": false 00:16:41.238 } 00:16:41.238 } 00:16:41.238 ] 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "subsystem": "vmd", 00:16:41.238 "config": [] 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "subsystem": "accel", 00:16:41.238 "config": [ 00:16:41.238 { 00:16:41.238 "method": "accel_set_options", 00:16:41.238 "params": { 00:16:41.238 "small_cache_size": 128, 00:16:41.238 "large_cache_size": 16, 00:16:41.238 "task_count": 2048, 00:16:41.238 "sequence_count": 2048, 00:16:41.238 "buf_count": 2048 00:16:41.238 } 00:16:41.238 } 00:16:41.238 ] 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "subsystem": "bdev", 00:16:41.238 "config": [ 00:16:41.238 { 00:16:41.238 "method": "bdev_set_options", 00:16:41.238 "params": { 00:16:41.238 "bdev_io_pool_size": 65535, 00:16:41.238 "bdev_io_cache_size": 256, 00:16:41.238 "bdev_auto_examine": true, 00:16:41.238 "iobuf_small_cache_size": 128, 00:16:41.238 "iobuf_large_cache_size": 16 00:16:41.238 } 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "method": "bdev_raid_set_options", 00:16:41.238 "params": { 00:16:41.238 "process_window_size_kb": 1024, 00:16:41.238 "process_max_bandwidth_mb_sec": 0 00:16:41.238 } 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "method": "bdev_iscsi_set_options", 00:16:41.238 "params": { 00:16:41.238 "timeout_sec": 30 00:16:41.238 } 00:16:41.238 
}, 00:16:41.238 { 00:16:41.238 "method": "bdev_nvme_set_options", 00:16:41.238 "params": { 00:16:41.238 "action_on_timeout": "none", 00:16:41.238 "timeout_us": 0, 00:16:41.238 "timeout_admin_us": 0, 00:16:41.238 "keep_alive_timeout_ms": 10000, 00:16:41.238 "arbitration_burst": 0, 00:16:41.238 "low_priority_weight": 0, 00:16:41.238 "medium_priority_weight": 0, 00:16:41.238 "high_priority_weight": 0, 00:16:41.238 "nvme_adminq_poll_period_us": 10000, 00:16:41.238 "nvme_ioq_poll_period_us": 0, 00:16:41.238 "io_queue_requests": 0, 00:16:41.238 "delay_cmd_submit": true, 00:16:41.238 "transport_retry_count": 4, 00:16:41.238 "bdev_retry_count": 3, 00:16:41.238 "transport_ack_timeout": 0, 00:16:41.238 "ctrlr_loss_timeout_sec": 0, 00:16:41.238 "reconnect_delay_sec": 0, 00:16:41.238 "fast_io_fail_timeout_sec": 0, 00:16:41.238 "disable_auto_failback": false, 00:16:41.238 "generate_uuids": false, 00:16:41.238 "transport_tos": 0, 00:16:41.238 "nvme_error_stat": false, 00:16:41.238 "rdma_srq_size": 0, 00:16:41.238 "io_path_stat": false, 00:16:41.238 "allow_accel_sequence": false, 00:16:41.238 "rdma_max_cq_size": 0, 00:16:41.238 "rdma_cm_event_timeout_ms": 0, 00:16:41.238 "dhchap_digests": [ 00:16:41.238 "sha256", 00:16:41.238 "sha384", 00:16:41.238 "sha512" 00:16:41.238 ], 00:16:41.238 "dhchap_dhgroups": [ 00:16:41.238 "null", 00:16:41.238 "ffdhe2048", 00:16:41.238 "ffdhe3072", 00:16:41.238 "ffdhe4096", 00:16:41.238 "ffdhe6144", 00:16:41.238 "ffdhe8192" 00:16:41.238 ] 00:16:41.238 } 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "method": "bdev_nvme_set_hotplug", 00:16:41.238 "params": { 00:16:41.238 "period_us": 100000, 00:16:41.238 "enable": false 00:16:41.238 } 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "method": "bdev_malloc_create", 00:16:41.238 "params": { 00:16:41.238 "name": "malloc0", 00:16:41.238 "num_blocks": 8192, 00:16:41.238 "block_size": 4096, 00:16:41.238 "physical_block_size": 4096, 00:16:41.238 "uuid": "60bccc34-7a1b-48cf-8407-fecfcb3474dc", 00:16:41.238 "optimal_io_boundary": 0, 00:16:41.238 "md_size": 0, 00:16:41.238 "dif_type": 0, 00:16:41.238 "dif_is_head_of_md": false, 00:16:41.238 "dif_pi_format": 0 00:16:41.238 } 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "method": "bdev_wait_for_examine" 00:16:41.238 } 00:16:41.238 ] 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "subsystem": "scsi", 00:16:41.238 "config": null 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "subsystem": "scheduler", 00:16:41.238 "config": [ 00:16:41.238 { 00:16:41.238 "method": "framework_set_scheduler", 00:16:41.238 "params": { 00:16:41.238 "name": "static" 00:16:41.238 } 00:16:41.238 } 00:16:41.238 ] 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "subsystem": "vhost_scsi", 00:16:41.238 "config": [] 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "subsystem": "vhost_blk", 00:16:41.238 "config": [] 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "subsystem": "ublk", 00:16:41.238 "config": [ 00:16:41.238 { 00:16:41.238 "method": "ublk_create_target", 00:16:41.238 "params": { 00:16:41.238 "cpumask": "1" 00:16:41.238 } 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "method": "ublk_start_disk", 00:16:41.238 "params": { 00:16:41.238 "bdev_name": "malloc0", 00:16:41.238 "ublk_id": 0, 00:16:41.238 "num_queues": 1, 00:16:41.238 "queue_depth": 128 00:16:41.238 } 00:16:41.238 } 00:16:41.238 ] 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "subsystem": "nbd", 00:16:41.238 "config": [] 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "subsystem": "nvmf", 00:16:41.238 "config": [ 00:16:41.238 { 00:16:41.238 "method": "nvmf_set_config", 00:16:41.238 
"params": { 00:16:41.238 "discovery_filter": "match_any", 00:16:41.238 "admin_cmd_passthru": { 00:16:41.238 "identify_ctrlr": false 00:16:41.238 }, 00:16:41.238 "dhchap_digests": [ 00:16:41.238 "sha256", 00:16:41.238 "sha384", 00:16:41.238 "sha512" 00:16:41.238 ], 00:16:41.238 "dhchap_dhgroups": [ 00:16:41.238 "null", 00:16:41.238 "ffdhe2048", 00:16:41.238 "ffdhe3072", 00:16:41.238 "ffdhe4096", 00:16:41.238 "ffdhe6144", 00:16:41.238 "ffdhe8192" 00:16:41.238 ] 00:16:41.238 } 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "method": "nvmf_set_max_subsystems", 00:16:41.238 "params": { 00:16:41.238 "max_subsystems": 1024 00:16:41.238 } 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "method": "nvmf_set_crdt", 00:16:41.238 "params": { 00:16:41.238 "crdt1": 0, 00:16:41.238 "crdt2": 0, 00:16:41.238 "crdt3": 0 00:16:41.238 } 00:16:41.238 } 00:16:41.238 ] 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "subsystem": "iscsi", 00:16:41.238 "config": [ 00:16:41.238 { 00:16:41.238 "method": "iscsi_set_options", 00:16:41.238 "params": { 00:16:41.238 "node_base": "iqn.2016-06.io.spdk", 00:16:41.238 "max_sessions": 128, 00:16:41.238 "max_connections_per_session": 2, 00:16:41.238 "max_queue_depth": 64, 00:16:41.238 "default_time2wait": 2, 00:16:41.238 "default_time2retain": 20, 00:16:41.238 "first_burst_length": 8192, 00:16:41.238 "immediate_data": true, 00:16:41.238 "allow_duplicated_isid": false, 00:16:41.238 "error_recovery_level": 0, 00:16:41.238 "nop_timeout": 60, 00:16:41.238 "nop_in_interval": 30, 00:16:41.238 "disable_chap": false, 00:16:41.238 "require_chap": false, 00:16:41.238 "mutual_chap": false, 00:16:41.238 "chap_group": 0, 00:16:41.238 "max_large_datain_per_connection": 64, 00:16:41.238 "max_r2t_per_connection": 4, 00:16:41.238 "pdu_pool_size": 36864, 00:16:41.238 "immediate_data_pool_size": 16384, 00:16:41.238 "data_out_pool_size": 2048 00:16:41.238 } 00:16:41.238 } 00:16:41.238 ] 00:16:41.238 } 00:16:41.238 ] 00:16:41.238 }' 00:16:41.238 03:25:28 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 86410 00:16:41.238 03:25:28 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 86410 ']' 00:16:41.238 03:25:28 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 86410 00:16:41.238 03:25:28 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:41.238 03:25:28 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:41.238 03:25:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86410 00:16:41.238 03:25:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:41.238 killing process with pid 86410 00:16:41.238 03:25:28 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:41.238 03:25:28 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86410' 00:16:41.238 03:25:28 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 86410 00:16:41.238 03:25:28 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 86410 00:16:41.499 [2024-11-21 03:25:28.949014] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:41.499 [2024-11-21 03:25:28.996974] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:41.499 [2024-11-21 03:25:28.997127] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:41.499 [2024-11-21 
03:25:29.005951] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:41.499 [2024-11-21 03:25:29.006016] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:41.499 [2024-11-21 03:25:29.006036] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:41.499 [2024-11-21 03:25:29.006071] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:41.499 [2024-11-21 03:25:29.006237] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:42.077 03:25:29 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=86448 00:16:42.077 03:25:29 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 86448 00:16:42.077 03:25:29 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 86448 ']' 00:16:42.077 03:25:29 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.077 03:25:29 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:42.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.077 03:25:29 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.077 03:25:29 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:42.077 03:25:29 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:42.077 03:25:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:42.077 03:25:29 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:42.077 "subsystems": [ 00:16:42.077 { 00:16:42.077 "subsystem": "fsdev", 00:16:42.077 "config": [ 00:16:42.077 { 00:16:42.077 "method": "fsdev_set_opts", 00:16:42.077 "params": { 00:16:42.077 "fsdev_io_pool_size": 65535, 00:16:42.077 "fsdev_io_cache_size": 256 00:16:42.077 } 00:16:42.077 } 00:16:42.077 ] 00:16:42.077 }, 00:16:42.077 { 00:16:42.077 "subsystem": "keyring", 00:16:42.077 "config": [] 00:16:42.077 }, 00:16:42.077 { 00:16:42.077 "subsystem": "iobuf", 00:16:42.077 "config": [ 00:16:42.077 { 00:16:42.077 "method": "iobuf_set_options", 00:16:42.077 "params": { 00:16:42.077 "small_pool_count": 8192, 00:16:42.077 "large_pool_count": 1024, 00:16:42.077 "small_bufsize": 8192, 00:16:42.077 "large_bufsize": 135168, 00:16:42.077 "enable_numa": false 00:16:42.077 } 00:16:42.077 } 00:16:42.077 ] 00:16:42.077 }, 00:16:42.077 { 00:16:42.077 "subsystem": "sock", 00:16:42.077 "config": [ 00:16:42.077 { 00:16:42.077 "method": "sock_set_default_impl", 00:16:42.077 "params": { 00:16:42.077 "impl_name": "posix" 00:16:42.077 } 00:16:42.077 }, 00:16:42.077 { 00:16:42.077 "method": "sock_impl_set_options", 00:16:42.077 "params": { 00:16:42.077 "impl_name": "ssl", 00:16:42.077 "recv_buf_size": 4096, 00:16:42.077 "send_buf_size": 4096, 00:16:42.077 "enable_recv_pipe": true, 00:16:42.077 "enable_quickack": false, 00:16:42.077 "enable_placement_id": 0, 00:16:42.077 "enable_zerocopy_send_server": true, 00:16:42.077 "enable_zerocopy_send_client": false, 00:16:42.077 "zerocopy_threshold": 0, 00:16:42.077 "tls_version": 0, 00:16:42.077 "enable_ktls": false 00:16:42.077 } 00:16:42.077 }, 00:16:42.077 { 00:16:42.077 "method": "sock_impl_set_options", 00:16:42.077 "params": { 00:16:42.077 "impl_name": "posix", 00:16:42.077 "recv_buf_size": 2097152, 00:16:42.077 "send_buf_size": 2097152, 00:16:42.077 "enable_recv_pipe": true, 
00:16:42.077 "enable_quickack": false, 00:16:42.077 "enable_placement_id": 0, 00:16:42.077 "enable_zerocopy_send_server": true, 00:16:42.077 "enable_zerocopy_send_client": false, 00:16:42.077 "zerocopy_threshold": 0, 00:16:42.077 "tls_version": 0, 00:16:42.077 "enable_ktls": false 00:16:42.077 } 00:16:42.077 } 00:16:42.077 ] 00:16:42.077 }, 00:16:42.077 { 00:16:42.077 "subsystem": "vmd", 00:16:42.077 "config": [] 00:16:42.077 }, 00:16:42.077 { 00:16:42.077 "subsystem": "accel", 00:16:42.077 "config": [ 00:16:42.077 { 00:16:42.077 "method": "accel_set_options", 00:16:42.077 "params": { 00:16:42.077 "small_cache_size": 128, 00:16:42.077 "large_cache_size": 16, 00:16:42.077 "task_count": 2048, 00:16:42.077 "sequence_count": 2048, 00:16:42.077 "buf_count": 2048 00:16:42.077 } 00:16:42.077 } 00:16:42.077 ] 00:16:42.077 }, 00:16:42.077 { 00:16:42.077 "subsystem": "bdev", 00:16:42.077 "config": [ 00:16:42.077 { 00:16:42.077 "method": "bdev_set_options", 00:16:42.077 "params": { 00:16:42.077 "bdev_io_pool_size": 65535, 00:16:42.077 "bdev_io_cache_size": 256, 00:16:42.077 "bdev_auto_examine": true, 00:16:42.077 "iobuf_small_cache_size": 128, 00:16:42.077 "iobuf_large_cache_size": 16 00:16:42.077 } 00:16:42.077 }, 00:16:42.077 { 00:16:42.077 "method": "bdev_raid_set_options", 00:16:42.077 "params": { 00:16:42.077 "process_window_size_kb": 1024, 00:16:42.077 "process_max_bandwidth_mb_sec": 0 00:16:42.077 } 00:16:42.077 }, 00:16:42.077 { 00:16:42.077 "method": "bdev_iscsi_set_options", 00:16:42.077 "params": { 00:16:42.077 "timeout_sec": 30 00:16:42.077 } 00:16:42.077 }, 00:16:42.077 { 00:16:42.077 "method": "bdev_nvme_set_options", 00:16:42.077 "params": { 00:16:42.077 "action_on_timeout": "none", 00:16:42.077 "timeout_us": 0, 00:16:42.077 "timeout_admin_us": 0, 00:16:42.077 "keep_alive_timeout_ms": 10000, 00:16:42.077 "arbitration_burst": 0, 00:16:42.077 "low_priority_weight": 0, 00:16:42.077 "medium_priority_weight": 0, 00:16:42.077 "high_priority_weight": 0, 00:16:42.077 "nvme_adminq_poll_period_us": 10000, 00:16:42.077 "nvme_ioq_poll_period_us": 0, 00:16:42.077 "io_queue_requests": 0, 00:16:42.077 "delay_cmd_submit": true, 00:16:42.077 "transport_retry_count": 4, 00:16:42.077 "bdev_retry_count": 3, 00:16:42.077 "transport_ack_timeout": 0, 00:16:42.077 "ctrlr_loss_timeout_sec": 0, 00:16:42.077 "reconnect_delay_sec": 0, 00:16:42.077 "fast_io_fail_timeout_sec": 0, 00:16:42.077 "disable_auto_failback": false, 00:16:42.077 "generate_uuids": false, 00:16:42.077 "transport_tos": 0, 00:16:42.077 "nvme_error_stat": false, 00:16:42.077 "rdma_srq_size": 0, 00:16:42.077 "io_path_stat": false, 00:16:42.077 "allow_accel_sequence": false, 00:16:42.077 "rdma_max_cq_size": 0, 00:16:42.077 "rdma_cm_event_timeout_ms": 0, 00:16:42.077 "dhchap_digests": [ 00:16:42.077 "sha256", 00:16:42.077 "sha384", 00:16:42.077 "sha512" 00:16:42.077 ], 00:16:42.077 "dhchap_dhgroups": [ 00:16:42.077 "null", 00:16:42.077 "ffdhe2048", 00:16:42.077 "ffdhe3072", 00:16:42.077 "ffdhe4096", 00:16:42.077 "ffdhe6144", 00:16:42.077 "ffdhe8192" 00:16:42.078 ] 00:16:42.078 } 00:16:42.078 }, 00:16:42.078 { 00:16:42.078 "method": "bdev_nvme_set_hotplug", 00:16:42.078 "params": { 00:16:42.078 "period_us": 100000, 00:16:42.078 "enable": false 00:16:42.078 } 00:16:42.078 }, 00:16:42.078 { 00:16:42.078 "method": "bdev_malloc_create", 00:16:42.078 "params": { 00:16:42.078 "name": "malloc0", 00:16:42.078 "num_blocks": 8192, 00:16:42.078 "block_size": 4096, 00:16:42.078 "physical_block_size": 4096, 00:16:42.078 "uuid": 
"60bccc34-7a1b-48cf-8407-fecfcb3474dc", 00:16:42.078 "optimal_io_boundary": 0, 00:16:42.078 "md_size": 0, 00:16:42.078 "dif_type": 0, 00:16:42.078 "dif_is_head_of_md": false, 00:16:42.078 "dif_pi_format": 0 00:16:42.078 } 00:16:42.078 }, 00:16:42.078 { 00:16:42.078 "method": "bdev_wait_for_examine" 00:16:42.078 } 00:16:42.078 ] 00:16:42.078 }, 00:16:42.078 { 00:16:42.078 "subsystem": "scsi", 00:16:42.078 "config": null 00:16:42.078 }, 00:16:42.078 { 00:16:42.078 "subsystem": "scheduler", 00:16:42.078 "config": [ 00:16:42.078 { 00:16:42.078 "method": "framework_set_scheduler", 00:16:42.078 "params": { 00:16:42.078 "name": "static" 00:16:42.078 } 00:16:42.078 } 00:16:42.078 ] 00:16:42.078 }, 00:16:42.078 { 00:16:42.078 "subsystem": "vhost_scsi", 00:16:42.078 "config": [] 00:16:42.078 }, 00:16:42.078 { 00:16:42.078 "subsystem": "vhost_blk", 00:16:42.078 "config": [] 00:16:42.078 }, 00:16:42.078 { 00:16:42.078 "subsystem": "ublk", 00:16:42.078 "config": [ 00:16:42.078 { 00:16:42.078 "method": "ublk_create_target", 00:16:42.078 "params": { 00:16:42.078 "cpumask": "1" 00:16:42.078 } 00:16:42.078 }, 00:16:42.078 { 00:16:42.078 "method": "ublk_start_disk", 00:16:42.078 "params": { 00:16:42.078 "bdev_name": "malloc0", 00:16:42.078 "ublk_id": 0, 00:16:42.078 "num_queues": 1, 00:16:42.078 "queue_depth": 128 00:16:42.078 } 00:16:42.078 } 00:16:42.078 ] 00:16:42.078 }, 00:16:42.078 { 00:16:42.078 "subsystem": "nbd", 00:16:42.078 "config": [] 00:16:42.078 }, 00:16:42.078 { 00:16:42.078 "subsystem": "nvmf", 00:16:42.078 "config": [ 00:16:42.078 { 00:16:42.078 "method": "nvmf_set_config", 00:16:42.078 "params": { 00:16:42.078 "discovery_filter": "match_any", 00:16:42.078 "admin_cmd_passthru": { 00:16:42.078 "identify_ctrlr": false 00:16:42.078 }, 00:16:42.078 "dhchap_digests": [ 00:16:42.078 "sha256", 00:16:42.078 "sha384", 00:16:42.078 "sha512" 00:16:42.078 ], 00:16:42.078 "dhchap_dhgroups": [ 00:16:42.078 "null", 00:16:42.078 "ffdhe2048", 00:16:42.078 "ffdhe3072", 00:16:42.078 "ffdhe4096", 00:16:42.078 "ffdhe6144", 00:16:42.078 "ffdhe8192" 00:16:42.078 ] 00:16:42.078 } 00:16:42.078 }, 00:16:42.078 { 00:16:42.078 "method": "nvmf_set_max_subsystems", 00:16:42.078 "params": { 00:16:42.078 "max_subsystems": 1024 00:16:42.078 } 00:16:42.078 }, 00:16:42.078 { 00:16:42.078 "method": "nvmf_set_crdt", 00:16:42.078 "params": { 00:16:42.078 "crdt1": 0, 00:16:42.078 "crdt2": 0, 00:16:42.078 "crdt3": 0 00:16:42.078 } 00:16:42.078 } 00:16:42.078 ] 00:16:42.078 }, 00:16:42.078 { 00:16:42.078 "subsystem": "iscsi", 00:16:42.078 "config": [ 00:16:42.078 { 00:16:42.078 "method": "iscsi_set_options", 00:16:42.078 "params": { 00:16:42.078 "node_base": "iqn.2016-06.io.spdk", 00:16:42.078 "max_sessions": 128, 00:16:42.078 "max_connections_per_session": 2, 00:16:42.078 "max_queue_depth": 64, 00:16:42.078 "default_time2wait": 2, 00:16:42.078 "default_time2retain": 20, 00:16:42.078 "first_burst_length": 8192, 00:16:42.078 "immediate_data": true, 00:16:42.078 "allow_duplicated_isid": false, 00:16:42.078 "error_recovery_level": 0, 00:16:42.078 "nop_timeout": 60, 00:16:42.078 "nop_in_interval": 30, 00:16:42.078 "disable_chap": false, 00:16:42.078 "require_chap": false, 00:16:42.078 "mutual_chap": false, 00:16:42.078 "chap_group": 0, 00:16:42.078 "max_large_datain_per_connection": 64, 00:16:42.078 "max_r2t_per_connection": 4, 00:16:42.078 "pdu_pool_size": 36864, 00:16:42.078 "immediate_data_pool_size": 16384, 00:16:42.078 "data_out_pool_size": 2048 00:16:42.078 } 00:16:42.078 } 00:16:42.078 ] 00:16:42.078 } 00:16:42.078 ] 
00:16:42.078 }' 00:16:42.078 [2024-11-21 03:25:29.455391] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:16:42.078 [2024-11-21 03:25:29.455666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86448 ] 00:16:42.078 [2024-11-21 03:25:29.590264] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:42.078 [2024-11-21 03:25:29.615650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.339 [2024-11-21 03:25:29.640300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.600 [2024-11-21 03:25:29.944911] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:42.600 [2024-11-21 03:25:29.945132] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:42.600 [2024-11-21 03:25:29.953002] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:42.600 [2024-11-21 03:25:29.953063] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:42.600 [2024-11-21 03:25:29.953071] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:42.600 [2024-11-21 03:25:29.953076] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:42.600 [2024-11-21 03:25:29.961968] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:42.600 [2024-11-21 03:25:29.961990] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:42.600 [2024-11-21 03:25:29.968920] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:42.600 [2024-11-21 03:25:29.968990] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:42.600 [2024-11-21 03:25:29.985913] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 86448 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 86448 ']' 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 86448 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86448 00:16:42.860 killing process with pid 86448 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86448' 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 86448 00:16:42.860 03:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 86448 00:16:43.122 [2024-11-21 03:25:30.500554] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:43.122 [2024-11-21 03:25:30.536931] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:43.122 [2024-11-21 03:25:30.537025] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:43.122 [2024-11-21 03:25:30.545928] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:43.122 [2024-11-21 03:25:30.545973] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:43.122 [2024-11-21 03:25:30.545979] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:43.122 [2024-11-21 03:25:30.546001] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:43.122 [2024-11-21 03:25:30.546107] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:43.383 03:25:30 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:43.383 00:16:43.383 real 0m3.430s 00:16:43.383 user 0m2.281s 00:16:43.383 sys 0m1.742s 00:16:43.383 03:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.383 ************************************ 00:16:43.383 END TEST test_save_ublk_config 00:16:43.383 ************************************ 00:16:43.383 03:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:43.383 03:25:30 ublk -- ublk/ublk.sh@139 -- # spdk_pid=86493 00:16:43.383 03:25:30 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:43.383 03:25:30 ublk -- ublk/ublk.sh@141 -- # waitforlisten 86493 00:16:43.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.383 03:25:30 ublk -- common/autotest_common.sh@835 -- # '[' -z 86493 ']' 00:16:43.383 03:25:30 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.383 03:25:30 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.383 03:25:30 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.383 03:25:30 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.383 03:25:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.383 03:25:30 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:43.383 [2024-11-21 03:25:30.935039] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
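(Aside on the test_save_ublk_config run that just completed above: it exercises a save/restore round trip — the running target's state is exported with save_config, the target is killed, and a new spdk_tgt is started with the saved JSON fed back via -c; the script uses /dev/fd/63 process substitution, but a plain file behaves the same. A minimal by-hand sketch of that flow, assuming spdk_tgt and rpc.py from an SPDK build tree and the default RPC socket at /var/tmp/spdk.sock; the 32 MiB size, 4096-byte blocks, single queue, and depth 128 mirror the saved config in the trace, they are not requirements:

    # Start a target and build the same topology the test uses:
    # a ublk target plus one malloc-backed ublk disk.
    ./build/bin/spdk_tgt -L ublk &
    TGT_PID=$!
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128

    # Export the live configuration, then restart from it; the ublk
    # device should come back as /dev/ublkb0 with no further RPCs.
    ./scripts/rpc.py save_config > saved.json
    kill "$TGT_PID"; wait "$TGT_PID"
    ./build/bin/spdk_tgt -L ublk -c saved.json

The trace below picks up with the second target's startup against exactly such a replayed config.)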
00:16:43.383 [2024-11-21 03:25:30.935153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86493 ] 00:16:43.643 [2024-11-21 03:25:31.069058] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:43.643 [2024-11-21 03:25:31.092561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:43.643 [2024-11-21 03:25:31.116759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.643 [2024-11-21 03:25:31.116818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.216 03:25:31 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.216 03:25:31 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:44.216 03:25:31 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:44.216 03:25:31 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:44.216 03:25:31 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.216 03:25:31 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.216 ************************************ 00:16:44.216 START TEST test_create_ublk 00:16:44.216 ************************************ 00:16:44.216 03:25:31 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:44.216 03:25:31 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:44.216 03:25:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.216 03:25:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.477 [2024-11-21 03:25:31.781916] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:44.477 [2024-11-21 03:25:31.782869] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:44.477 03:25:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.477 03:25:31 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:44.477 03:25:31 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:44.477 03:25:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.477 03:25:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.477 03:25:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.477 03:25:31 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:44.477 03:25:31 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:44.477 03:25:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.477 03:25:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.477 [2024-11-21 03:25:31.838017] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:44.477 [2024-11-21 03:25:31.838314] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:44.477 [2024-11-21 03:25:31.838324] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:44.477 [2024-11-21 03:25:31.838330] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:44.477 [2024-11-21 03:25:31.847085] ublk.c: 349:ublk_ctrl_process_cqe: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:44.477 [2024-11-21 03:25:31.847110] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:44.477 [2024-11-21 03:25:31.853923] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:44.477 [2024-11-21 03:25:31.854395] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:44.477 [2024-11-21 03:25:31.881929] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:44.477 03:25:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.477 03:25:31 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:44.477 03:25:31 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:44.477 03:25:31 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:44.477 03:25:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.477 03:25:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.477 03:25:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.477 03:25:31 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:44.477 { 00:16:44.477 "ublk_device": "/dev/ublkb0", 00:16:44.477 "id": 0, 00:16:44.477 "queue_depth": 512, 00:16:44.477 "num_queues": 4, 00:16:44.477 "bdev_name": "Malloc0" 00:16:44.477 } 00:16:44.477 ]' 00:16:44.477 03:25:31 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:44.477 03:25:31 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:44.477 03:25:31 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:44.477 03:25:31 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:44.477 03:25:31 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:44.477 03:25:32 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:44.477 03:25:32 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:44.477 03:25:32 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:44.477 03:25:32 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:44.739 03:25:32 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:44.739 03:25:32 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:44.739 03:25:32 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:44.739 03:25:32 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:44.739 03:25:32 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:44.739 03:25:32 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:44.739 03:25:32 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:44.739 03:25:32 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:44.739 03:25:32 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:44.739 03:25:32 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:44.739 03:25:32 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:44.739 03:25:32 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio 
--name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:44.739 03:25:32 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:44.739 fio: verification read phase will never start because write phase uses all of runtime 00:16:44.739 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:44.739 fio-3.35 00:16:44.739 Starting 1 process 00:16:54.804 00:16:54.804 fio_test: (groupid=0, jobs=1): err= 0: pid=86539: Thu Nov 21 03:25:42 2024 00:16:54.804 write: IOPS=14.2k, BW=55.4MiB/s (58.1MB/s)(554MiB/10001msec); 0 zone resets 00:16:54.804 clat (usec): min=35, max=8012, avg=69.80, stdev=122.60 00:16:54.804 lat (usec): min=36, max=8013, avg=70.22, stdev=122.61 00:16:54.804 clat percentiles (usec): 00:16:54.804 | 1.00th=[ 53], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 61], 00:16:54.804 | 30.00th=[ 62], 40.00th=[ 63], 50.00th=[ 65], 60.00th=[ 66], 00:16:54.804 | 70.00th=[ 68], 80.00th=[ 70], 90.00th=[ 73], 95.00th=[ 76], 00:16:54.804 | 99.00th=[ 86], 99.50th=[ 93], 99.90th=[ 2540], 99.95th=[ 3621], 00:16:54.804 | 99.99th=[ 4015] 00:16:54.804 bw ( KiB/s): min=28656, max=59400, per=100.00%, avg=56700.63, stdev=6960.20, samples=19 00:16:54.804 iops : min= 7164, max=14850, avg=14175.16, stdev=1740.05, samples=19 00:16:54.804 lat (usec) : 50=0.60%, 100=99.02%, 250=0.17%, 500=0.01%, 750=0.01% 00:16:54.804 lat (usec) : 1000=0.01% 00:16:54.804 lat (msec) : 2=0.06%, 4=0.10%, 10=0.02% 00:16:54.804 cpu : usr=2.62%, sys=10.95%, ctx=141769, majf=0, minf=795 00:16:54.804 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:54.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.804 issued rwts: total=0,141762,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.804 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:54.804 00:16:54.804 Run status group 0 (all jobs): 00:16:54.804 WRITE: bw=55.4MiB/s (58.1MB/s), 55.4MiB/s-55.4MiB/s (58.1MB/s-58.1MB/s), io=554MiB (581MB), run=10001-10001msec 00:16:54.804 00:16:54.804 Disk stats (read/write): 00:16:54.804 ublkb0: ios=0/140263, merge=0/0, ticks=0/8619, in_queue=8620, util=99.09% 00:16:54.804 03:25:42 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:54.804 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.804 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:54.804 [2024-11-21 03:25:42.298953] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:54.804 [2024-11-21 03:25:42.345426] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:54.804 [2024-11-21 03:25:42.346404] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:54.804 [2024-11-21 03:25:42.352945] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:54.804 [2024-11-21 03:25:42.353196] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:54.804 [2024-11-21 03:25:42.353209] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 
stopped 00:16:54.804 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.804 03:25:42 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:16:54.804 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:54.804 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:54.804 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:54.804 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.804 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:54.804 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.804 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:54.804 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.804 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.064 [2024-11-21 03:25:42.369003] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:55.064 request: 00:16:55.064 { 00:16:55.064 "ublk_id": 0, 00:16:55.064 "method": "ublk_stop_disk", 00:16:55.064 "req_id": 1 00:16:55.064 } 00:16:55.064 Got JSON-RPC error response 00:16:55.064 response: 00:16:55.064 { 00:16:55.064 "code": -19, 00:16:55.064 "message": "No such device" 00:16:55.064 } 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:55.064 03:25:42 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.064 [2024-11-21 03:25:42.384974] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:55.064 [2024-11-21 03:25:42.386247] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:55.064 [2024-11-21 03:25:42.386278] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.064 03:25:42 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.064 03:25:42 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:55.064 03:25:42 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.064 03:25:42 
ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:55.064 03:25:42 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:55.064 03:25:42 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:55.064 03:25:42 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.064 03:25:42 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:55.064 03:25:42 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:55.064 ************************************ 00:16:55.064 END TEST test_create_ublk 00:16:55.064 ************************************ 00:16:55.064 03:25:42 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:55.064 00:16:55.064 real 0m10.777s 00:16:55.064 user 0m0.566s 00:16:55.064 sys 0m1.168s 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.064 03:25:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.064 03:25:42 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:55.064 03:25:42 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:55.064 03:25:42 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.064 03:25:42 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.064 ************************************ 00:16:55.064 START TEST test_create_multi_ublk 00:16:55.064 ************************************ 00:16:55.064 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:55.064 03:25:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:55.064 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.064 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.064 [2024-11-21 03:25:42.596918] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:55.064 [2024-11-21 03:25:42.597835] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:55.064 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.064 03:25:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:55.064 03:25:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:55.064 03:25:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:55.064 03:25:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:55.064 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.064 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.323 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.323 03:25:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:55.323 03:25:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:55.323 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.323 03:25:42 
ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.323 [2024-11-21 03:25:42.669028] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:55.323 [2024-11-21 03:25:42.669322] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:55.323 [2024-11-21 03:25:42.669335] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:55.323 [2024-11-21 03:25:42.669341] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:55.323 [2024-11-21 03:25:42.692927] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:55.323 [2024-11-21 03:25:42.692950] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:55.323 [2024-11-21 03:25:42.704924] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:55.323 [2024-11-21 03:25:42.705413] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:55.323 [2024-11-21 03:25:42.740919] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:55.323 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.323 03:25:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:55.323 03:25:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:55.323 03:25:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:55.323 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.323 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.323 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.323 03:25:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:55.323 03:25:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:55.323 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.323 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.323 [2024-11-21 03:25:42.825011] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:55.323 [2024-11-21 03:25:42.825302] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:55.323 [2024-11-21 03:25:42.825316] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:55.323 [2024-11-21 03:25:42.825321] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:55.323 [2024-11-21 03:25:42.836931] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:55.323 [2024-11-21 03:25:42.836949] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:55.323 [2024-11-21 03:25:42.848922] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:55.323 [2024-11-21 03:25:42.849410] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:55.323 [2024-11-21 03:25:42.873930] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:55.582 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.582 03:25:42 
ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:55.582 03:25:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:55.582 03:25:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:55.582 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.582 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.582 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.582 03:25:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:55.582 03:25:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:55.582 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.582 03:25:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.582 [2024-11-21 03:25:42.957010] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:55.582 [2024-11-21 03:25:42.957307] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:55.582 [2024-11-21 03:25:42.957319] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:55.582 [2024-11-21 03:25:42.957325] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:55.582 [2024-11-21 03:25:42.968927] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:55.582 [2024-11-21 03:25:42.968947] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:55.582 [2024-11-21 03:25:42.980922] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:55.582 [2024-11-21 03:25:42.981405] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:55.582 [2024-11-21 03:25:42.993944] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:55.582 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.582 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:55.582 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:55.582 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:55.582 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.582 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.582 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.582 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:55.582 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:55.582 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.582 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.582 [2024-11-21 03:25:43.077022] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:55.582 [2024-11-21 03:25:43.077316] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:55.583 [2024-11-21 03:25:43.077329] ublk.c: 
971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:55.583 [2024-11-21 03:25:43.077334] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:55.583 [2024-11-21 03:25:43.090095] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:55.583 [2024-11-21 03:25:43.090113] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:55.583 [2024-11-21 03:25:43.100932] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:55.583 [2024-11-21 03:25:43.101409] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:55.583 [2024-11-21 03:25:43.113960] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:55.583 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.583 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:55.583 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:55.583 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.583 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:55.841 { 00:16:55.841 "ublk_device": "/dev/ublkb0", 00:16:55.841 "id": 0, 00:16:55.841 "queue_depth": 512, 00:16:55.841 "num_queues": 4, 00:16:55.841 "bdev_name": "Malloc0" 00:16:55.841 }, 00:16:55.841 { 00:16:55.841 "ublk_device": "/dev/ublkb1", 00:16:55.841 "id": 1, 00:16:55.841 "queue_depth": 512, 00:16:55.841 "num_queues": 4, 00:16:55.841 "bdev_name": "Malloc1" 00:16:55.841 }, 00:16:55.841 { 00:16:55.841 "ublk_device": "/dev/ublkb2", 00:16:55.841 "id": 2, 00:16:55.841 "queue_depth": 512, 00:16:55.841 "num_queues": 4, 00:16:55.841 "bdev_name": "Malloc2" 00:16:55.841 }, 00:16:55.841 { 00:16:55.841 "ublk_device": "/dev/ublkb3", 00:16:55.841 "id": 3, 00:16:55.841 "queue_depth": 512, 00:16:55.841 "num_queues": 4, 00:16:55.841 "bdev_name": "Malloc3" 00:16:55.841 } 00:16:55.841 ]' 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:55.841 03:25:43 
ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:55.841 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:56.100 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:56.100 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:56.100 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:56.100 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:56.100 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:56.100 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:56.100 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:56.100 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:56.100 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:56.100 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:56.100 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:56.100 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:56.100 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:56.100 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:56.100 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:56.100 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:56.100 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:56.100 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:56.359 [2024-11-21 03:25:43.769006] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:56.359 [2024-11-21 03:25:43.807436] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:56.359 [2024-11-21 03:25:43.808579] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:56.359 [2024-11-21 03:25:43.816929] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:56.359 [2024-11-21 03:25:43.817188] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:56.359 [2024-11-21 03:25:43.817202] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:56.359 [2024-11-21 03:25:43.833005] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:56.359 [2024-11-21 03:25:43.872961] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:56.359 [2024-11-21 03:25:43.873816] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:56.359 [2024-11-21 03:25:43.880934] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:56.359 [2024-11-21 03:25:43.881159] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:56.359 [2024-11-21 03:25:43.881174] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.359 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:56.359 [2024-11-21 03:25:43.896973] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:56.618 [2024-11-21 03:25:43.926432] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:56.618 [2024-11-21 03:25:43.927464] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:56.618 [2024-11-21 03:25:43.936929] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:56.618 [2024-11-21 03:25:43.937170] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:56.618 [2024-11-21 03:25:43.937188] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:56.618 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.618 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:56.618 03:25:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # 
rpc_cmd ublk_stop_disk 3 00:16:56.618 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.618 03:25:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:56.618 [2024-11-21 03:25:43.952987] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:56.618 [2024-11-21 03:25:43.994955] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:56.618 [2024-11-21 03:25:43.995612] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:56.618 [2024-11-21 03:25:44.004961] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:56.618 [2024-11-21 03:25:44.005189] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:56.618 [2024-11-21 03:25:44.005202] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:56.618 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.618 03:25:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:56.876 [2024-11-21 03:25:44.195981] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:56.876 [2024-11-21 03:25:44.197293] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:56.876 [2024-11-21 03:25:44.197321] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:56.876 03:25:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:56.876 03:25:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:56.876 03:25:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:56.876 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.876 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:56.876 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.876 03:25:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:56.876 03:25:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:56.877 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.877 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:56.877 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.877 03:25:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:56.877 03:25:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:56.877 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.877 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:56.877 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.877 03:25:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:56.877 03:25:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:56.877 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.877 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:57.135 03:25:44 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.135 03:25:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:57.135 03:25:44 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:57.135 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.135 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:57.135 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.135 03:25:44 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:57.135 03:25:44 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:57.135 03:25:44 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:57.135 03:25:44 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:57.135 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.135 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:57.135 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.135 03:25:44 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:57.135 03:25:44 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:57.135 ************************************ 00:16:57.135 END TEST test_create_multi_ublk 00:16:57.135 ************************************ 00:16:57.135 03:25:44 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:57.135 00:16:57.135 real 0m1.961s 00:16:57.135 user 0m0.794s 00:16:57.135 sys 0m0.139s 00:16:57.135 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.135 03:25:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:57.135 03:25:44 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:57.135 03:25:44 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:57.135 03:25:44 ublk -- ublk/ublk.sh@130 -- # killprocess 86493 00:16:57.135 03:25:44 ublk -- common/autotest_common.sh@954 -- # '[' -z 86493 ']' 00:16:57.135 03:25:44 ublk -- common/autotest_common.sh@958 -- # kill -0 86493 00:16:57.135 03:25:44 ublk -- common/autotest_common.sh@959 -- # uname 00:16:57.135 03:25:44 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.135 03:25:44 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86493 00:16:57.135 killing process with pid 86493 00:16:57.135 03:25:44 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.135 03:25:44 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.135 03:25:44 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86493' 00:16:57.135 03:25:44 ublk -- common/autotest_common.sh@973 -- # kill 86493 00:16:57.135 03:25:44 ublk -- common/autotest_common.sh@978 -- # wait 86493 00:16:57.395 [2024-11-21 03:25:44.766541] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:57.395 [2024-11-21 03:25:44.766602] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:57.656 00:16:57.656 real 0m17.836s 00:16:57.656 user 0m28.118s 00:16:57.656 sys 0m6.979s 00:16:57.656 03:25:45 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.656 03:25:45 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:57.656 ************************************ 00:16:57.656 END TEST ublk 00:16:57.656 
************************************ 00:16:57.656 03:25:45 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:57.656 03:25:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:57.656 03:25:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.656 03:25:45 -- common/autotest_common.sh@10 -- # set +x 00:16:57.656 ************************************ 00:16:57.656 START TEST ublk_recovery 00:16:57.656 ************************************ 00:16:57.656 03:25:45 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:57.656 * Looking for test storage... 00:16:57.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:57.656 03:25:45 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:57.656 03:25:45 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:57.656 03:25:45 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:57.656 03:25:45 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:57.656 03:25:45 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:57.656 03:25:45 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:57.656 03:25:45 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:57.656 03:25:45 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:57.656 03:25:45 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:57.656 03:25:45 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:57.656 03:25:45 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:57.656 03:25:45 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:57.656 03:25:45 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:57.656 03:25:45 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:57.656 03:25:45 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:57.656 03:25:45 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:57.656 03:25:45 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:57.656 03:25:45 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:57.656 03:25:45 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:57.916 03:25:45 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:57.916 03:25:45 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:57.916 03:25:45 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:57.916 03:25:45 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:57.916 03:25:45 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:57.916 03:25:45 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:57.916 03:25:45 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:57.916 03:25:45 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:57.916 03:25:45 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:57.916 03:25:45 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:57.916 03:25:45 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:57.916 03:25:45 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:57.916 03:25:45 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:57.916 03:25:45 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:57.916 03:25:45 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:57.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.916 --rc genhtml_branch_coverage=1 00:16:57.916 --rc genhtml_function_coverage=1 00:16:57.916 --rc genhtml_legend=1 00:16:57.916 --rc geninfo_all_blocks=1 00:16:57.916 --rc geninfo_unexecuted_blocks=1 00:16:57.916 00:16:57.916 ' 00:16:57.916 03:25:45 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:57.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.916 --rc genhtml_branch_coverage=1 00:16:57.916 --rc genhtml_function_coverage=1 00:16:57.916 --rc genhtml_legend=1 00:16:57.916 --rc geninfo_all_blocks=1 00:16:57.916 --rc geninfo_unexecuted_blocks=1 00:16:57.916 00:16:57.916 ' 00:16:57.916 03:25:45 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:57.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.916 --rc genhtml_branch_coverage=1 00:16:57.916 --rc genhtml_function_coverage=1 00:16:57.916 --rc genhtml_legend=1 00:16:57.916 --rc geninfo_all_blocks=1 00:16:57.916 --rc geninfo_unexecuted_blocks=1 00:16:57.916 00:16:57.916 ' 00:16:57.916 03:25:45 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:57.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.916 --rc genhtml_branch_coverage=1 00:16:57.916 --rc genhtml_function_coverage=1 00:16:57.916 --rc genhtml_legend=1 00:16:57.916 --rc geninfo_all_blocks=1 00:16:57.916 --rc geninfo_unexecuted_blocks=1 00:16:57.916 00:16:57.916 ' 00:16:57.916 03:25:45 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:57.916 03:25:45 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:57.916 03:25:45 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:57.916 03:25:45 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:57.916 03:25:45 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:57.916 03:25:45 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:57.916 03:25:45 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:57.916 03:25:45 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:57.916 03:25:45 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:57.916 03:25:45 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:57.916 03:25:45 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=86857 00:16:57.916 03:25:45 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:57.916 03:25:45 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 86857 00:16:57.916 03:25:45 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 86857 ']' 00:16:57.916 03:25:45 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.916 03:25:45 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:57.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.916 03:25:45 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.916 03:25:45 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.916 03:25:45 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.916 03:25:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.916 [2024-11-21 03:25:45.296492] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:16:57.916 [2024-11-21 03:25:45.296589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86857 ] 00:16:57.916 [2024-11-21 03:25:45.422871] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
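The lines above are ublk_recovery.sh bringing up its first SPDK target: load the ublk_drv kernel module, start spdk_tgt on cores 0-1 (mask 0x3) with ublk debug logging, and block until the RPC socket answers. A condensed manual equivalent, assuming the same repo path as this run and using the real rpc_get_methods RPC as a stand-in for the harness's waitforlisten helper, would be:

    modprobe ublk_drv
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    # poll until /var/tmp/spdk.sock accepts RPCs (waitforlisten does this with retries)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The target is then populated over that same socket (ublk_create_target, bdev_malloc_create, ublk_start_disk), as the trace lines that follow show.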
00:16:57.916 [2024-11-21 03:25:45.451791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:57.916 [2024-11-21 03:25:45.473349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.916 [2024-11-21 03:25:45.473414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.851 03:25:46 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.851 03:25:46 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:58.851 03:25:46 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:58.851 03:25:46 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.851 03:25:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.851 [2024-11-21 03:25:46.144922] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:58.851 [2024-11-21 03:25:46.146004] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:58.851 03:25:46 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.851 03:25:46 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:58.851 03:25:46 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.851 03:25:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.851 malloc0 00:16:58.851 03:25:46 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.851 03:25:46 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:58.851 03:25:46 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.851 03:25:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.851 [2024-11-21 03:25:46.177252] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:16:58.851 [2024-11-21 03:25:46.177364] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:58.851 [2024-11-21 03:25:46.177375] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:58.851 [2024-11-21 03:25:46.177382] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:58.851 [2024-11-21 03:25:46.186022] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:58.851 [2024-11-21 03:25:46.186037] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:58.851 [2024-11-21 03:25:46.192936] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:58.851 [2024-11-21 03:25:46.193075] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:58.851 [2024-11-21 03:25:46.207925] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:58.851 1 00:16:58.851 03:25:46 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.851 03:25:46 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:59.785 03:25:47 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=86890 00:16:59.785 03:25:47 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:59.785 03:25:47 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:59.785 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:59.785 
fio-3.35 00:16:59.785 Starting 1 process 00:17:05.053 03:25:52 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 86857 00:17:05.053 03:25:52 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:10.342 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 86857 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:10.342 03:25:57 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=87003 00:17:10.342 03:25:57 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:10.342 03:25:57 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:10.342 03:25:57 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 87003 00:17:10.342 03:25:57 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 87003 ']' 00:17:10.342 03:25:57 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.342 03:25:57 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.342 03:25:57 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.342 03:25:57 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.342 03:25:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.342 [2024-11-21 03:25:57.321238] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:17:10.342 [2024-11-21 03:25:57.321393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87003 ] 00:17:10.342 [2024-11-21 03:25:57.459125] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:10.342 [2024-11-21 03:25:57.484286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:10.343 [2024-11-21 03:25:57.502133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.343 [2024-11-21 03:25:57.502157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.604 03:25:58 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.604 03:25:58 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:10.604 03:25:58 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:10.604 03:25:58 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.604 03:25:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.604 [2024-11-21 03:25:58.146916] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:10.604 [2024-11-21 03:25:58.147850] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:10.604 03:25:58 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.604 03:25:58 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:10.604 03:25:58 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.604 03:25:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.865 malloc0 00:17:10.865 03:25:58 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.865 03:25:58 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:10.865 03:25:58 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.865 03:25:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.865 [2024-11-21 03:25:58.179045] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:10.865 [2024-11-21 03:25:58.179076] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:10.865 [2024-11-21 03:25:58.179084] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:10.865 [2024-11-21 03:25:58.186943] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:10.865 [2024-11-21 03:25:58.186965] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:17:10.865 [2024-11-21 03:25:58.186978] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:10.865 [2024-11-21 03:25:58.187040] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:10.865 1 00:17:10.865 03:25:58 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.865 03:25:58 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 86890 00:17:10.865 [2024-11-21 03:25:58.194933] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:10.865 [2024-11-21 03:25:58.201270] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:10.865 [2024-11-21 03:25:58.209098] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:10.865 [2024-11-21 03:25:58.209116] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:07.086 00:18:07.086 fio_test: (groupid=0, jobs=1): err= 0: pid=86893: Thu Nov 21 03:26:47 2024 00:18:07.086 read: IOPS=27.8k, BW=108MiB/s (114MB/s)(6510MiB/60002msec) 00:18:07.086 slat (nsec): min=1039, max=355418, 
avg=4802.82, stdev=1241.72 00:18:07.086 clat (usec): min=765, max=5995.0k, avg=2269.21, stdev=37709.41 00:18:07.086 lat (usec): min=771, max=5995.0k, avg=2274.01, stdev=37709.41 00:18:07.086 clat percentiles (usec): 00:18:07.086 | 1.00th=[ 1745], 5.00th=[ 1827], 10.00th=[ 1860], 20.00th=[ 1876], 00:18:07.086 | 30.00th=[ 1893], 40.00th=[ 1909], 50.00th=[ 1926], 60.00th=[ 1942], 00:18:07.086 | 70.00th=[ 1958], 80.00th=[ 1975], 90.00th=[ 2008], 95.00th=[ 2769], 00:18:07.086 | 99.00th=[ 4752], 99.50th=[ 5145], 99.90th=[ 5932], 99.95th=[ 7701], 00:18:07.086 | 99.99th=[12780] 00:18:07.086 bw ( KiB/s): min=25992, max=127064, per=100.00%, avg=122365.25, stdev=12857.43, samples=108 00:18:07.086 iops : min= 6498, max=31766, avg=30591.32, stdev=3214.36, samples=108 00:18:07.086 write: IOPS=27.8k, BW=108MiB/s (114MB/s)(6504MiB/60002msec); 0 zone resets 00:18:07.086 slat (nsec): min=1061, max=389275, avg=4827.41, stdev=1256.96 00:18:07.086 clat (usec): min=830, max=5995.3k, avg=2330.82, stdev=36565.24 00:18:07.086 lat (usec): min=836, max=5995.3k, avg=2335.64, stdev=36565.24 00:18:07.086 clat percentiles (usec): 00:18:07.086 | 1.00th=[ 1795], 5.00th=[ 1926], 10.00th=[ 1942], 20.00th=[ 1975], 00:18:07.086 | 30.00th=[ 1991], 40.00th=[ 2008], 50.00th=[ 2008], 60.00th=[ 2024], 00:18:07.086 | 70.00th=[ 2040], 80.00th=[ 2057], 90.00th=[ 2114], 95.00th=[ 2671], 00:18:07.086 | 99.00th=[ 4686], 99.50th=[ 5145], 99.90th=[ 5997], 99.95th=[ 7504], 00:18:07.086 | 99.99th=[12911] 00:18:07.086 bw ( KiB/s): min=26024, max=126624, per=100.00%, avg=122253.11, stdev=12876.40, samples=108 00:18:07.086 iops : min= 6506, max=31656, avg=30563.28, stdev=3219.10, samples=108 00:18:07.086 lat (usec) : 1000=0.01% 00:18:07.086 lat (msec) : 2=64.59%, 4=33.23%, 10=2.16%, 20=0.02%, >=2000=0.01% 00:18:07.086 cpu : usr=5.99%, sys=27.46%, ctx=109099, majf=0, minf=13 00:18:07.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:07.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:07.086 issued rwts: total=1666444,1665104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.086 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:07.086 00:18:07.086 Run status group 0 (all jobs): 00:18:07.086 READ: bw=108MiB/s (114MB/s), 108MiB/s-108MiB/s (114MB/s-114MB/s), io=6510MiB (6826MB), run=60002-60002msec 00:18:07.086 WRITE: bw=108MiB/s (114MB/s), 108MiB/s-108MiB/s (114MB/s-114MB/s), io=6504MiB (6820MB), run=60002-60002msec 00:18:07.086 00:18:07.086 Disk stats (read/write): 00:18:07.086 ublkb1: ios=1663142/1661876, merge=0/0, ticks=3691323/3656708, in_queue=7348031, util=99.89% 00:18:07.086 03:26:47 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:07.086 03:26:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.086 03:26:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.086 [2024-11-21 03:26:47.472800] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:07.086 [2024-11-21 03:26:47.509950] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:07.086 [2024-11-21 03:26:47.510123] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:07.086 [2024-11-21 03:26:47.518932] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:07.086 [2024-11-21 03:26:47.519027] ublk.c: 
985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:07.086 [2024-11-21 03:26:47.519036] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:07.086 03:26:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.086 03:26:47 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:07.086 03:26:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.086 03:26:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.087 [2024-11-21 03:26:47.532983] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:07.087 [2024-11-21 03:26:47.534218] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:07.087 [2024-11-21 03:26:47.534247] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:07.087 03:26:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.087 03:26:47 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:07.087 03:26:47 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:07.087 03:26:47 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 87003 00:18:07.087 03:26:47 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 87003 ']' 00:18:07.087 03:26:47 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 87003 00:18:07.087 03:26:47 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:18:07.087 03:26:47 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.087 03:26:47 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87003 00:18:07.087 03:26:47 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:07.087 03:26:47 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:07.087 killing process with pid 87003 00:18:07.087 03:26:47 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87003' 00:18:07.087 03:26:47 ublk_recovery -- common/autotest_common.sh@973 -- # kill 87003 00:18:07.087 03:26:47 ublk_recovery -- common/autotest_common.sh@978 -- # wait 87003 00:18:07.087 [2024-11-21 03:26:47.733181] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:07.087 [2024-11-21 03:26:47.733245] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:07.087 00:18:07.087 real 1m2.917s 00:18:07.087 user 1m41.119s 00:18:07.087 sys 0m34.190s 00:18:07.087 03:26:48 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.087 ************************************ 00:18:07.087 END TEST ublk_recovery 00:18:07.087 ************************************ 00:18:07.087 03:26:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.087 03:26:48 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:18:07.087 03:26:48 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:07.087 03:26:48 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:07.087 03:26:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:07.087 03:26:48 -- common/autotest_common.sh@10 -- # set +x 00:18:07.087 03:26:48 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:07.087 03:26:48 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:07.087 03:26:48 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:18:07.087 03:26:48 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:07.087 03:26:48 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:07.087 03:26:48 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:07.087 03:26:48 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:07.087 03:26:48 -- 
spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:07.087 03:26:48 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:07.087 03:26:48 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:18:07.087 03:26:48 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:07.087 03:26:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:07.087 03:26:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.087 03:26:48 -- common/autotest_common.sh@10 -- # set +x 00:18:07.087 ************************************ 00:18:07.087 START TEST ftl 00:18:07.087 ************************************ 00:18:07.087 03:26:48 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:07.087 * Looking for test storage... 00:18:07.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:07.087 03:26:48 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:07.087 03:26:48 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:07.087 03:26:48 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:18:07.087 03:26:48 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:07.087 03:26:48 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:07.087 03:26:48 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:07.087 03:26:48 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:07.087 03:26:48 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:07.087 03:26:48 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:07.087 03:26:48 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:07.087 03:26:48 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:07.087 03:26:48 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:07.087 03:26:48 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:07.087 03:26:48 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:07.087 03:26:48 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:07.087 03:26:48 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:07.087 03:26:48 ftl -- scripts/common.sh@345 -- # : 1 00:18:07.087 03:26:48 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:07.087 03:26:48 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
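The scripts/common.sh version check traced above is the lcov gate: lt 1.15 2 splits both version strings on '.', '-', and ':', then compares components numerically left to right. A condensed sketch of that logic (the real helper also normalizes each component through decimal() and handles the other comparison operators) is:

    lt() {   # is $1 strictly older than $2?
        local IFS=.-: v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ver1[v] > ver2[v] )) && return 1   # unset components compare as 0
            (( ver1[v] < ver2[v] )) && return 0
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"

Because 1 < 2 in the first component, the comparison returns 0 in this run, which is why the --rc lcov_branch_coverage / lcov_function_coverage options are exported in the trace above.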
ver1_l : ver2_l) )) 00:18:07.087 03:26:48 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:07.087 03:26:48 ftl -- scripts/common.sh@353 -- # local d=1 00:18:07.087 03:26:48 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:07.087 03:26:48 ftl -- scripts/common.sh@355 -- # echo 1 00:18:07.087 03:26:48 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:07.087 03:26:48 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:07.087 03:26:48 ftl -- scripts/common.sh@353 -- # local d=2 00:18:07.087 03:26:48 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:07.087 03:26:48 ftl -- scripts/common.sh@355 -- # echo 2 00:18:07.087 03:26:48 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:07.087 03:26:48 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:07.087 03:26:48 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:07.087 03:26:48 ftl -- scripts/common.sh@368 -- # return 0 00:18:07.087 03:26:48 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:07.087 03:26:48 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:07.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.087 --rc genhtml_branch_coverage=1 00:18:07.087 --rc genhtml_function_coverage=1 00:18:07.087 --rc genhtml_legend=1 00:18:07.087 --rc geninfo_all_blocks=1 00:18:07.087 --rc geninfo_unexecuted_blocks=1 00:18:07.087 00:18:07.087 ' 00:18:07.087 03:26:48 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:07.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.087 --rc genhtml_branch_coverage=1 00:18:07.087 --rc genhtml_function_coverage=1 00:18:07.087 --rc genhtml_legend=1 00:18:07.087 --rc geninfo_all_blocks=1 00:18:07.087 --rc geninfo_unexecuted_blocks=1 00:18:07.087 00:18:07.087 ' 00:18:07.087 03:26:48 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:07.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.087 --rc genhtml_branch_coverage=1 00:18:07.087 --rc genhtml_function_coverage=1 00:18:07.087 --rc genhtml_legend=1 00:18:07.087 --rc geninfo_all_blocks=1 00:18:07.087 --rc geninfo_unexecuted_blocks=1 00:18:07.087 00:18:07.087 ' 00:18:07.087 03:26:48 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:07.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.087 --rc genhtml_branch_coverage=1 00:18:07.087 --rc genhtml_function_coverage=1 00:18:07.087 --rc genhtml_legend=1 00:18:07.087 --rc geninfo_all_blocks=1 00:18:07.087 --rc geninfo_unexecuted_blocks=1 00:18:07.087 00:18:07.087 ' 00:18:07.087 03:26:48 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:07.087 03:26:48 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:07.087 03:26:48 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:07.087 03:26:48 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:07.087 03:26:48 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:07.087 03:26:48 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:07.087 03:26:48 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:07.087 03:26:48 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:07.087 03:26:48 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:07.087 03:26:48 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:07.088 03:26:48 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:07.088 03:26:48 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:07.088 03:26:48 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:07.088 03:26:48 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:07.088 03:26:48 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:07.088 03:26:48 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:07.088 03:26:48 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:07.088 03:26:48 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:07.088 03:26:48 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:07.088 03:26:48 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:07.088 03:26:48 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:07.088 03:26:48 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:07.088 03:26:48 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:07.088 03:26:48 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:07.088 03:26:48 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:07.088 03:26:48 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:07.088 03:26:48 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:07.088 03:26:48 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:07.088 03:26:48 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:07.088 03:26:48 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:07.088 03:26:48 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:07.088 03:26:48 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:18:07.088 03:26:48 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:07.088 03:26:48 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:07.088 03:26:48 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:07.088 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:07.088 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:07.088 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:07.088 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:07.088 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:07.088 03:26:48 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=87806 00:18:07.088 03:26:48 ftl -- ftl/ftl.sh@38 -- # waitforlisten 87806 00:18:07.088 03:26:48 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:07.088 03:26:48 ftl -- common/autotest_common.sh@835 -- # '[' -z 87806 ']' 00:18:07.088 03:26:48 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.088 03:26:48 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.088 03:26:48 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.088 03:26:48 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.088 03:26:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:07.088 [2024-11-21 03:26:48.805835] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:18:07.088 [2024-11-21 03:26:48.806025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87806 ] 00:18:07.088 [2024-11-21 03:26:48.946407] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:07.088 [2024-11-21 03:26:48.971292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.088 [2024-11-21 03:26:48.996428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.088 03:26:49 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.088 03:26:49 ftl -- common/autotest_common.sh@868 -- # return 0 00:18:07.088 03:26:49 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:07.088 03:26:49 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:07.088 03:26:50 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:07.088 03:26:50 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:07.088 03:26:50 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:07.088 03:26:50 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:07.088 03:26:50 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:07.088 03:26:50 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:07.088 03:26:50 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:07.088 03:26:50 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:07.088 03:26:50 ftl -- ftl/ftl.sh@50 -- # break 00:18:07.088 03:26:50 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:07.088 03:26:50 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:18:07.088 03:26:50 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:07.088 03:26:50 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:07.088 03:26:50 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:07.088 03:26:50 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:07.088 03:26:50 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:07.088 03:26:50 ftl -- ftl/ftl.sh@63 -- # break 00:18:07.088 03:26:50 ftl -- ftl/ftl.sh@66 -- # killprocess 87806 00:18:07.088 03:26:50 ftl -- common/autotest_common.sh@954 -- # '[' -z 87806 ']' 00:18:07.088 03:26:50 ftl -- common/autotest_common.sh@958 -- # kill -0 87806 
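At this point ftl.sh has finished probing: it loaded the NVMe controllers via gen_nvme.sh, asked the freshly started target for the resulting bdevs, and picked a cache and a base device with two jq filters before recycling the probe target (the killprocess sequence continuing below). The filters, copied from the trace and runnable against any live SPDK RPC socket, were:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # nv-cache candidates: 64-byte metadata, non-zoned, at least 1310720 blocks
    $rpc bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
    # base candidates: same size floor, excluding the PCI address already chosen for the cache
    $rpc bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'

In this run they resolved to 0000:00:10.0 for the nv cache and 0000:00:11.0 for the base device.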
00:18:07.088 03:26:50 ftl -- common/autotest_common.sh@959 -- # uname 00:18:07.088 03:26:51 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.088 03:26:51 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87806 00:18:07.088 03:26:51 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:07.088 03:26:51 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:07.088 killing process with pid 87806 00:18:07.088 03:26:51 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87806' 00:18:07.088 03:26:51 ftl -- common/autotest_common.sh@973 -- # kill 87806 00:18:07.088 03:26:51 ftl -- common/autotest_common.sh@978 -- # wait 87806 00:18:07.088 03:26:51 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:07.088 03:26:51 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:07.088 03:26:51 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:07.088 03:26:51 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.088 03:26:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:07.088 ************************************ 00:18:07.088 START TEST ftl_fio_basic 00:18:07.088 ************************************ 00:18:07.088 03:26:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:07.088 * Looking for test storage... 00:18:07.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:07.088 03:26:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:07.088 03:26:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:18:07.088 03:26:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:07.088 03:26:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:07.088 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:07.088 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:07.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.089 --rc genhtml_branch_coverage=1 00:18:07.089 --rc genhtml_function_coverage=1 00:18:07.089 --rc genhtml_legend=1 00:18:07.089 --rc geninfo_all_blocks=1 00:18:07.089 --rc geninfo_unexecuted_blocks=1 00:18:07.089 00:18:07.089 ' 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:07.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.089 --rc genhtml_branch_coverage=1 00:18:07.089 --rc genhtml_function_coverage=1 00:18:07.089 --rc genhtml_legend=1 00:18:07.089 --rc geninfo_all_blocks=1 00:18:07.089 --rc geninfo_unexecuted_blocks=1 00:18:07.089 00:18:07.089 ' 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:07.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.089 --rc genhtml_branch_coverage=1 00:18:07.089 --rc genhtml_function_coverage=1 00:18:07.089 --rc genhtml_legend=1 00:18:07.089 --rc geninfo_all_blocks=1 00:18:07.089 --rc geninfo_unexecuted_blocks=1 00:18:07.089 00:18:07.089 ' 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:07.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.089 --rc genhtml_branch_coverage=1 00:18:07.089 --rc genhtml_function_coverage=1 00:18:07.089 --rc genhtml_legend=1 00:18:07.089 --rc geninfo_all_blocks=1 00:18:07.089 --rc geninfo_unexecuted_blocks=1 00:18:07.089 00:18:07.089 ' 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:07.089 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:07.090 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:07.090 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:07.090 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=87916 00:18:07.090 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 87916 00:18:07.090 03:26:51 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 87916 ']' 00:18:07.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.090 03:26:51 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.090 03:26:51 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.090 03:26:51 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.090 03:26:51 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.090 03:26:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:07.090 03:26:51 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:07.090 [2024-11-21 03:26:51.497373] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:18:07.090 [2024-11-21 03:26:51.497495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87916 ] 00:18:07.090 [2024-11-21 03:26:51.631952] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:07.090 [2024-11-21 03:26:51.656464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:07.090 [2024-11-21 03:26:51.677376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.090 [2024-11-21 03:26:51.677680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.090 [2024-11-21 03:26:51.677768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:07.090 { 00:18:07.090 "name": "nvme0n1", 00:18:07.090 "aliases": [ 00:18:07.090 "0006928b-7e76-48de-9b18-24af2eb0a170" 00:18:07.090 ], 00:18:07.090 "product_name": "NVMe disk", 00:18:07.090 "block_size": 4096, 00:18:07.090 "num_blocks": 1310720, 00:18:07.090 "uuid": "0006928b-7e76-48de-9b18-24af2eb0a170", 00:18:07.090 "numa_id": -1, 00:18:07.090 "assigned_rate_limits": { 00:18:07.090 "rw_ios_per_sec": 0, 00:18:07.090 "rw_mbytes_per_sec": 0, 00:18:07.090 "r_mbytes_per_sec": 0, 00:18:07.090 "w_mbytes_per_sec": 0 00:18:07.090 }, 00:18:07.090 "claimed": false, 00:18:07.090 "zoned": false, 00:18:07.090 "supported_io_types": { 00:18:07.090 "read": true, 00:18:07.090 "write": true, 00:18:07.090 "unmap": true, 00:18:07.090 "flush": true, 00:18:07.090 "reset": true, 00:18:07.090 "nvme_admin": true, 00:18:07.090 "nvme_io": true, 00:18:07.090 "nvme_io_md": false, 00:18:07.090 "write_zeroes": true, 00:18:07.090 "zcopy": false, 00:18:07.090 "get_zone_info": false, 00:18:07.090 "zone_management": false, 00:18:07.090 "zone_append": false, 00:18:07.090 "compare": true, 00:18:07.090 "compare_and_write": false, 00:18:07.090 "abort": true, 00:18:07.090 "seek_hole": false, 00:18:07.090 "seek_data": false, 00:18:07.090 "copy": true, 00:18:07.090 "nvme_iov_md": false 00:18:07.090 }, 00:18:07.090 "driver_specific": { 00:18:07.090 "nvme": [ 00:18:07.090 { 00:18:07.090 "pci_address": "0000:00:11.0", 00:18:07.090 "trid": { 00:18:07.090 "trtype": "PCIe", 00:18:07.090 
"traddr": "0000:00:11.0" 00:18:07.090 }, 00:18:07.090 "ctrlr_data": { 00:18:07.090 "cntlid": 0, 00:18:07.090 "vendor_id": "0x1b36", 00:18:07.090 "model_number": "QEMU NVMe Ctrl", 00:18:07.090 "serial_number": "12341", 00:18:07.090 "firmware_revision": "8.0.0", 00:18:07.090 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:07.090 "oacs": { 00:18:07.090 "security": 0, 00:18:07.090 "format": 1, 00:18:07.090 "firmware": 0, 00:18:07.090 "ns_manage": 1 00:18:07.090 }, 00:18:07.090 "multi_ctrlr": false, 00:18:07.090 "ana_reporting": false 00:18:07.090 }, 00:18:07.090 "vs": { 00:18:07.090 "nvme_version": "1.4" 00:18:07.090 }, 00:18:07.090 "ns_data": { 00:18:07.090 "id": 1, 00:18:07.090 "can_share": false 00:18:07.090 } 00:18:07.090 } 00:18:07.090 ], 00:18:07.090 "mp_policy": "active_passive" 00:18:07.090 } 00:18:07.090 } 00:18:07.090 ]' 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:07.090 03:26:52 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:07.090 03:26:53 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:07.090 03:26:53 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:07.090 03:26:53 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=d07412b2-2f4f-499c-b334-a66ac18ec7e3 00:18:07.090 03:26:53 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d07412b2-2f4f-499c-b334-a66ac18ec7e3 00:18:07.090 03:26:53 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=3b2da08d-c738-4189-a320-bcd7dee96b25 00:18:07.090 03:26:53 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3b2da08d-c738-4189-a320-bcd7dee96b25 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=3b2da08d-c738-4189-a320-bcd7dee96b25 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 3b2da08d-c738-4189-a320-bcd7dee96b25 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=3b2da08d-c738-4189-a320-bcd7dee96b25 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3b2da08d-c738-4189-a320-bcd7dee96b25 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:07.091 { 00:18:07.091 "name": "3b2da08d-c738-4189-a320-bcd7dee96b25", 00:18:07.091 "aliases": [ 00:18:07.091 "lvs/nvme0n1p0" 00:18:07.091 ], 00:18:07.091 "product_name": "Logical Volume", 00:18:07.091 "block_size": 4096, 00:18:07.091 "num_blocks": 26476544, 00:18:07.091 "uuid": "3b2da08d-c738-4189-a320-bcd7dee96b25", 00:18:07.091 "assigned_rate_limits": { 00:18:07.091 "rw_ios_per_sec": 0, 00:18:07.091 "rw_mbytes_per_sec": 0, 00:18:07.091 "r_mbytes_per_sec": 0, 00:18:07.091 "w_mbytes_per_sec": 0 00:18:07.091 }, 00:18:07.091 "claimed": false, 00:18:07.091 "zoned": false, 00:18:07.091 "supported_io_types": { 00:18:07.091 "read": true, 00:18:07.091 "write": true, 00:18:07.091 "unmap": true, 00:18:07.091 "flush": false, 00:18:07.091 "reset": true, 00:18:07.091 "nvme_admin": false, 00:18:07.091 "nvme_io": false, 00:18:07.091 "nvme_io_md": false, 00:18:07.091 "write_zeroes": true, 00:18:07.091 "zcopy": false, 00:18:07.091 "get_zone_info": false, 00:18:07.091 "zone_management": false, 00:18:07.091 "zone_append": false, 00:18:07.091 "compare": false, 00:18:07.091 "compare_and_write": false, 00:18:07.091 "abort": false, 00:18:07.091 "seek_hole": true, 00:18:07.091 "seek_data": true, 00:18:07.091 "copy": false, 00:18:07.091 "nvme_iov_md": false 00:18:07.091 }, 00:18:07.091 "driver_specific": { 00:18:07.091 "lvol": { 00:18:07.091 "lvol_store_uuid": "d07412b2-2f4f-499c-b334-a66ac18ec7e3", 00:18:07.091 "base_bdev": "nvme0n1", 00:18:07.091 "thin_provision": true, 00:18:07.091 "num_allocated_clusters": 0, 00:18:07.091 "snapshot": false, 00:18:07.091 "clone": false, 00:18:07.091 "esnap_clone": false 00:18:07.091 } 00:18:07.091 } 00:18:07.091 } 00:18:07.091 ]' 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 3b2da08d-c738-4189-a320-bcd7dee96b25 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=3b2da08d-c738-4189-a320-bcd7dee96b25 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:07.091 03:26:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3b2da08d-c738-4189-a320-bcd7dee96b25 00:18:07.091 03:26:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:07.091 { 00:18:07.091 "name": "3b2da08d-c738-4189-a320-bcd7dee96b25", 00:18:07.091 "aliases": [ 00:18:07.091 "lvs/nvme0n1p0" 00:18:07.091 ], 00:18:07.091 "product_name": "Logical Volume", 00:18:07.091 "block_size": 4096, 00:18:07.091 "num_blocks": 26476544, 00:18:07.091 "uuid": "3b2da08d-c738-4189-a320-bcd7dee96b25", 00:18:07.091 "assigned_rate_limits": { 00:18:07.091 "rw_ios_per_sec": 0, 00:18:07.091 "rw_mbytes_per_sec": 0, 00:18:07.091 "r_mbytes_per_sec": 0, 00:18:07.091 "w_mbytes_per_sec": 0 00:18:07.091 }, 00:18:07.091 "claimed": false, 00:18:07.091 "zoned": false, 00:18:07.091 "supported_io_types": { 00:18:07.091 "read": true, 00:18:07.091 "write": true, 00:18:07.091 "unmap": true, 00:18:07.091 "flush": false, 00:18:07.091 "reset": true, 00:18:07.091 "nvme_admin": false, 00:18:07.091 "nvme_io": false, 00:18:07.091 "nvme_io_md": false, 00:18:07.091 "write_zeroes": true, 00:18:07.091 "zcopy": false, 00:18:07.091 "get_zone_info": false, 00:18:07.091 "zone_management": false, 00:18:07.091 "zone_append": false, 00:18:07.091 "compare": false, 00:18:07.091 "compare_and_write": false, 00:18:07.091 "abort": false, 00:18:07.091 "seek_hole": true, 00:18:07.091 "seek_data": true, 00:18:07.091 "copy": false, 00:18:07.091 "nvme_iov_md": false 00:18:07.091 }, 00:18:07.091 "driver_specific": { 00:18:07.091 "lvol": { 00:18:07.091 "lvol_store_uuid": "d07412b2-2f4f-499c-b334-a66ac18ec7e3", 00:18:07.091 "base_bdev": "nvme0n1", 00:18:07.091 "thin_provision": true, 00:18:07.091 "num_allocated_clusters": 0, 00:18:07.091 "snapshot": false, 00:18:07.091 "clone": false, 00:18:07.091 "esnap_clone": false 00:18:07.091 } 00:18:07.091 } 00:18:07.091 } 00:18:07.091 ]' 00:18:07.091 03:26:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:07.091 03:26:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:07.091 03:26:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:07.091 03:26:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:07.091 03:26:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:07.091 03:26:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:07.091 03:26:54 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:07.091 03:26:54 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:07.091 03:26:54 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:07.091 03:26:54 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:07.091 03:26:54 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:07.091 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:07.091 03:26:54 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 3b2da08d-c738-4189-a320-bcd7dee96b25 00:18:07.091 03:26:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=3b2da08d-c738-4189-a320-bcd7dee96b25 00:18:07.091 03:26:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:07.091 03:26:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:07.092 03:26:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:07.092 03:26:54 ftl.ftl_fio_basic 
-- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3b2da08d-c738-4189-a320-bcd7dee96b25 00:18:07.092 03:26:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:07.092 { 00:18:07.092 "name": "3b2da08d-c738-4189-a320-bcd7dee96b25", 00:18:07.092 "aliases": [ 00:18:07.092 "lvs/nvme0n1p0" 00:18:07.092 ], 00:18:07.092 "product_name": "Logical Volume", 00:18:07.092 "block_size": 4096, 00:18:07.092 "num_blocks": 26476544, 00:18:07.092 "uuid": "3b2da08d-c738-4189-a320-bcd7dee96b25", 00:18:07.092 "assigned_rate_limits": { 00:18:07.092 "rw_ios_per_sec": 0, 00:18:07.092 "rw_mbytes_per_sec": 0, 00:18:07.092 "r_mbytes_per_sec": 0, 00:18:07.092 "w_mbytes_per_sec": 0 00:18:07.092 }, 00:18:07.092 "claimed": false, 00:18:07.092 "zoned": false, 00:18:07.092 "supported_io_types": { 00:18:07.092 "read": true, 00:18:07.092 "write": true, 00:18:07.092 "unmap": true, 00:18:07.092 "flush": false, 00:18:07.092 "reset": true, 00:18:07.092 "nvme_admin": false, 00:18:07.092 "nvme_io": false, 00:18:07.092 "nvme_io_md": false, 00:18:07.092 "write_zeroes": true, 00:18:07.092 "zcopy": false, 00:18:07.092 "get_zone_info": false, 00:18:07.092 "zone_management": false, 00:18:07.092 "zone_append": false, 00:18:07.092 "compare": false, 00:18:07.092 "compare_and_write": false, 00:18:07.092 "abort": false, 00:18:07.092 "seek_hole": true, 00:18:07.092 "seek_data": true, 00:18:07.092 "copy": false, 00:18:07.092 "nvme_iov_md": false 00:18:07.092 }, 00:18:07.092 "driver_specific": { 00:18:07.092 "lvol": { 00:18:07.092 "lvol_store_uuid": "d07412b2-2f4f-499c-b334-a66ac18ec7e3", 00:18:07.092 "base_bdev": "nvme0n1", 00:18:07.092 "thin_provision": true, 00:18:07.092 "num_allocated_clusters": 0, 00:18:07.092 "snapshot": false, 00:18:07.092 "clone": false, 00:18:07.092 "esnap_clone": false 00:18:07.092 } 00:18:07.092 } 00:18:07.092 } 00:18:07.092 ]' 00:18:07.092 03:26:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:07.352 03:26:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:07.352 03:26:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:07.352 03:26:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:07.352 03:26:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:07.352 03:26:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:07.352 03:26:54 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:07.352 03:26:54 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:07.352 03:26:54 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3b2da08d-c738-4189-a320-bcd7dee96b25 -c nvc0n1p0 --l2p_dram_limit 60 00:18:07.352 [2024-11-21 03:26:54.892033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.352 [2024-11-21 03:26:54.892067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:07.352 [2024-11-21 03:26:54.892079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:07.352 [2024-11-21 03:26:54.892085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.352 [2024-11-21 03:26:54.892148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.352 [2024-11-21 03:26:54.892156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:07.352 [2024-11-21 03:26:54.892167] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:07.352 [2024-11-21 03:26:54.892173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.352 [2024-11-21 03:26:54.892209] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:07.352 [2024-11-21 03:26:54.892436] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:07.352 [2024-11-21 03:26:54.892450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.352 [2024-11-21 03:26:54.892456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:07.352 [2024-11-21 03:26:54.892472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:18:07.352 [2024-11-21 03:26:54.892491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.352 [2024-11-21 03:26:54.892559] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0fe2f1eb-fe71-455d-9b94-5e2c9754202b 00:18:07.352 [2024-11-21 03:26:54.893551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.352 [2024-11-21 03:26:54.893568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:07.352 [2024-11-21 03:26:54.893576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:07.352 [2024-11-21 03:26:54.893584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.352 [2024-11-21 03:26:54.898804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.352 [2024-11-21 03:26:54.898832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:07.352 [2024-11-21 03:26:54.898841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.143 ms 00:18:07.352 [2024-11-21 03:26:54.898850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.352 [2024-11-21 03:26:54.898940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.352 [2024-11-21 03:26:54.898949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:07.352 [2024-11-21 03:26:54.898964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:07.352 [2024-11-21 03:26:54.898971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.352 [2024-11-21 03:26:54.899018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.352 [2024-11-21 03:26:54.899027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:07.352 [2024-11-21 03:26:54.899041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:07.352 [2024-11-21 03:26:54.899048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.352 [2024-11-21 03:26:54.899091] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:07.352 [2024-11-21 03:26:54.900390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.352 [2024-11-21 03:26:54.900411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:07.352 [2024-11-21 03:26:54.900420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.310 ms 00:18:07.352 [2024-11-21 03:26:54.900427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.352 [2024-11-21 03:26:54.900465] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.352 [2024-11-21 03:26:54.900482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:07.352 [2024-11-21 03:26:54.900493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:07.352 [2024-11-21 03:26:54.900500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.352 [2024-11-21 03:26:54.900527] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:07.352 [2024-11-21 03:26:54.900644] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:07.352 [2024-11-21 03:26:54.900655] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:07.352 [2024-11-21 03:26:54.900665] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:07.352 [2024-11-21 03:26:54.900675] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:07.352 [2024-11-21 03:26:54.900685] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:07.352 [2024-11-21 03:26:54.900694] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:07.352 [2024-11-21 03:26:54.900700] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:07.352 [2024-11-21 03:26:54.900708] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:07.352 [2024-11-21 03:26:54.900714] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:07.352 [2024-11-21 03:26:54.900721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.352 [2024-11-21 03:26:54.900726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:07.352 [2024-11-21 03:26:54.900733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:18:07.352 [2024-11-21 03:26:54.900738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.352 [2024-11-21 03:26:54.900809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.352 [2024-11-21 03:26:54.900817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:07.352 [2024-11-21 03:26:54.900824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:07.352 [2024-11-21 03:26:54.900836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.352 [2024-11-21 03:26:54.900949] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:07.352 [2024-11-21 03:26:54.900956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:07.352 [2024-11-21 03:26:54.900964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:07.352 [2024-11-21 03:26:54.900969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.352 [2024-11-21 03:26:54.900976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:07.352 [2024-11-21 03:26:54.900981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:07.352 [2024-11-21 03:26:54.900988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:07.352 [2024-11-21 03:26:54.900993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:07.352 
[2024-11-21 03:26:54.901001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:07.353 [2024-11-21 03:26:54.901006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:07.353 [2024-11-21 03:26:54.901013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:07.353 [2024-11-21 03:26:54.901023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:07.353 [2024-11-21 03:26:54.901031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:07.353 [2024-11-21 03:26:54.901036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:07.353 [2024-11-21 03:26:54.901042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:07.353 [2024-11-21 03:26:54.901046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.353 [2024-11-21 03:26:54.901053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:07.353 [2024-11-21 03:26:54.901058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:07.353 [2024-11-21 03:26:54.901065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.353 [2024-11-21 03:26:54.901079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:07.353 [2024-11-21 03:26:54.901086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:07.353 [2024-11-21 03:26:54.901091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.353 [2024-11-21 03:26:54.901097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:07.353 [2024-11-21 03:26:54.901102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:07.353 [2024-11-21 03:26:54.901109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.353 [2024-11-21 03:26:54.901114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:07.353 [2024-11-21 03:26:54.901120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:07.353 [2024-11-21 03:26:54.901125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.353 [2024-11-21 03:26:54.901132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:07.353 [2024-11-21 03:26:54.901137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:07.353 [2024-11-21 03:26:54.901142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.353 [2024-11-21 03:26:54.901147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:07.353 [2024-11-21 03:26:54.901154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:07.353 [2024-11-21 03:26:54.901159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:07.353 [2024-11-21 03:26:54.901165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:07.353 [2024-11-21 03:26:54.901170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:07.353 [2024-11-21 03:26:54.901176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:07.353 [2024-11-21 03:26:54.901181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:07.353 [2024-11-21 03:26:54.901187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:07.353 [2024-11-21 03:26:54.901192] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:18:07.353 [2024-11-21 03:26:54.901198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:07.353 [2024-11-21 03:26:54.901204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:07.353 [2024-11-21 03:26:54.901211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.353 [2024-11-21 03:26:54.901216] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:07.353 [2024-11-21 03:26:54.901225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:07.353 [2024-11-21 03:26:54.901230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:07.353 [2024-11-21 03:26:54.901238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.353 [2024-11-21 03:26:54.901245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:07.353 [2024-11-21 03:26:54.901252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:07.353 [2024-11-21 03:26:54.901256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:07.353 [2024-11-21 03:26:54.901263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:07.353 [2024-11-21 03:26:54.901268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:07.353 [2024-11-21 03:26:54.901275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:07.353 [2024-11-21 03:26:54.901282] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:07.353 [2024-11-21 03:26:54.901299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:07.353 [2024-11-21 03:26:54.901312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:07.353 [2024-11-21 03:26:54.901319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:07.353 [2024-11-21 03:26:54.901324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:07.353 [2024-11-21 03:26:54.901330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:07.353 [2024-11-21 03:26:54.901335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:07.353 [2024-11-21 03:26:54.901343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:07.353 [2024-11-21 03:26:54.901348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:07.353 [2024-11-21 03:26:54.901355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:07.353 [2024-11-21 03:26:54.901360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:07.353 [2024-11-21 03:26:54.901366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 
blk_sz:0x20 00:18:07.353 [2024-11-21 03:26:54.901371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:07.353 [2024-11-21 03:26:54.901378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:07.353 [2024-11-21 03:26:54.901383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:07.353 [2024-11-21 03:26:54.901390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:07.353 [2024-11-21 03:26:54.901395] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:07.353 [2024-11-21 03:26:54.901402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:07.353 [2024-11-21 03:26:54.901408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:07.353 [2024-11-21 03:26:54.901415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:07.353 [2024-11-21 03:26:54.901420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:07.353 [2024-11-21 03:26:54.901427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:07.353 [2024-11-21 03:26:54.901434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.354 [2024-11-21 03:26:54.901442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:07.354 [2024-11-21 03:26:54.901447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:18:07.354 [2024-11-21 03:26:54.901462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.354 [2024-11-21 03:26:54.901531] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
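Note: the FTL startup trace above is produced by the bdev_ftl_create call at fio.sh@60. For reference, the full bdev stack it sits on was assembled with the RPCs that appear earlier in this log; a condensed replay, with the base lvol size (103424 MiB), the cache split size (5171 MiB, matching the "NV cache device capacity: 5171.00 MiB" line above) and --l2p_dram_limit 60 (which is why a 59-of-60 MiB l2p resident limit is reported later) taken verbatim from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe -> nvme0n1
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache NVMe -> nvc0n1
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                    # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")         # thin-provisioned, 103424 MiB
    $rpc bdev_split_create nvc0n1 -s 5171 1                            # yields nvc0n1p0, the write-buffer cache
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 60

The $() captures mirror how the log's lvs= and split_bdev= values were obtained; the 240-second rpc.py timeout covers the NV cache scrub that the startup sequence warns about next.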
00:18:07.354 [2024-11-21 03:26:54.901540] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:09.257 [2024-11-21 03:26:56.778984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.257 [2024-11-21 03:26:56.779034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:09.257 [2024-11-21 03:26:56.779047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1877.446 ms 00:18:09.257 [2024-11-21 03:26:56.779056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.257 [2024-11-21 03:26:56.787018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.257 [2024-11-21 03:26:56.787050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:09.257 [2024-11-21 03:26:56.787070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.901 ms 00:18:09.257 [2024-11-21 03:26:56.787079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.257 [2024-11-21 03:26:56.787145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.257 [2024-11-21 03:26:56.787153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:09.257 [2024-11-21 03:26:56.787168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:09.257 [2024-11-21 03:26:56.787175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.257 [2024-11-21 03:26:56.803597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.257 [2024-11-21 03:26:56.803652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:09.257 [2024-11-21 03:26:56.803667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.376 ms 00:18:09.257 [2024-11-21 03:26:56.803680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.257 [2024-11-21 03:26:56.803725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.257 [2024-11-21 03:26:56.803740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:09.257 [2024-11-21 03:26:56.803750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:09.257 [2024-11-21 03:26:56.803761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.257 [2024-11-21 03:26:56.804171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.257 [2024-11-21 03:26:56.804195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:09.257 [2024-11-21 03:26:56.804209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:18:09.257 [2024-11-21 03:26:56.804224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.258 [2024-11-21 03:26:56.804377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.258 [2024-11-21 03:26:56.804400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:09.258 [2024-11-21 03:26:56.804410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:18:09.258 [2024-11-21 03:26:56.804422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.258 [2024-11-21 03:26:56.810477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.258 [2024-11-21 03:26:56.810512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:09.258 [2024-11-21 
03:26:56.810534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.010 ms 00:18:09.258 [2024-11-21 03:26:56.810548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.258 [2024-11-21 03:26:56.819072] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:09.516 [2024-11-21 03:26:56.831646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.516 [2024-11-21 03:26:56.831668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:09.516 [2024-11-21 03:26:56.831678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.012 ms 00:18:09.516 [2024-11-21 03:26:56.831692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.516 [2024-11-21 03:26:56.862207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.516 [2024-11-21 03:26:56.862234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:09.516 [2024-11-21 03:26:56.862246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.476 ms 00:18:09.516 [2024-11-21 03:26:56.862260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.516 [2024-11-21 03:26:56.862409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.516 [2024-11-21 03:26:56.862417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:09.516 [2024-11-21 03:26:56.862424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:18:09.516 [2024-11-21 03:26:56.862430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.516 [2024-11-21 03:26:56.864756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.516 [2024-11-21 03:26:56.864788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:09.516 [2024-11-21 03:26:56.864798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.276 ms 00:18:09.516 [2024-11-21 03:26:56.864804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.516 [2024-11-21 03:26:56.866713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.516 [2024-11-21 03:26:56.866736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:09.516 [2024-11-21 03:26:56.866744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.868 ms 00:18:09.516 [2024-11-21 03:26:56.866750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.516 [2024-11-21 03:26:56.867015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.516 [2024-11-21 03:26:56.867024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:09.516 [2024-11-21 03:26:56.867033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:18:09.516 [2024-11-21 03:26:56.867039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.516 [2024-11-21 03:26:56.885441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.516 [2024-11-21 03:26:56.885467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:09.516 [2024-11-21 03:26:56.885476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.378 ms 00:18:09.516 [2024-11-21 03:26:56.885482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.516 [2024-11-21 03:26:56.888568] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.516 [2024-11-21 03:26:56.888590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:09.516 [2024-11-21 03:26:56.888598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.022 ms 00:18:09.516 [2024-11-21 03:26:56.888604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.516 [2024-11-21 03:26:56.890806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.516 [2024-11-21 03:26:56.890936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:09.516 [2024-11-21 03:26:56.890951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.166 ms 00:18:09.516 [2024-11-21 03:26:56.890957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.516 [2024-11-21 03:26:56.893448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.516 [2024-11-21 03:26:56.893474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:09.516 [2024-11-21 03:26:56.893484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.455 ms 00:18:09.516 [2024-11-21 03:26:56.893489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.516 [2024-11-21 03:26:56.893524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.516 [2024-11-21 03:26:56.893531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:09.516 [2024-11-21 03:26:56.893539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:09.516 [2024-11-21 03:26:56.893544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.516 [2024-11-21 03:26:56.893608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.516 [2024-11-21 03:26:56.893617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:09.516 [2024-11-21 03:26:56.893624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:09.516 [2024-11-21 03:26:56.893630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.516 [2024-11-21 03:26:56.894441] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2002.063 ms, result 0 00:18:09.516 { 00:18:09.516 "name": "ftl0", 00:18:09.516 "uuid": "0fe2f1eb-fe71-455d-9b94-5e2c9754202b" 00:18:09.516 } 00:18:09.516 03:26:56 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:09.516 03:26:56 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:09.516 03:26:56 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:09.516 03:26:56 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:09.516 03:26:56 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:09.516 03:26:56 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:09.516 03:26:56 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:09.775 03:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:09.775 [ 00:18:09.775 { 00:18:09.775 "name": "ftl0", 00:18:09.775 "aliases": [ 00:18:09.775 "0fe2f1eb-fe71-455d-9b94-5e2c9754202b" 00:18:09.775 ], 00:18:09.775 "product_name": "FTL disk", 00:18:09.775 
"block_size": 4096, 00:18:09.775 "num_blocks": 20971520, 00:18:09.775 "uuid": "0fe2f1eb-fe71-455d-9b94-5e2c9754202b", 00:18:09.775 "assigned_rate_limits": { 00:18:09.775 "rw_ios_per_sec": 0, 00:18:09.775 "rw_mbytes_per_sec": 0, 00:18:09.775 "r_mbytes_per_sec": 0, 00:18:09.775 "w_mbytes_per_sec": 0 00:18:09.775 }, 00:18:09.775 "claimed": false, 00:18:09.775 "zoned": false, 00:18:09.775 "supported_io_types": { 00:18:09.775 "read": true, 00:18:09.775 "write": true, 00:18:09.775 "unmap": true, 00:18:09.775 "flush": true, 00:18:09.775 "reset": false, 00:18:09.775 "nvme_admin": false, 00:18:09.775 "nvme_io": false, 00:18:09.775 "nvme_io_md": false, 00:18:09.775 "write_zeroes": true, 00:18:09.775 "zcopy": false, 00:18:09.775 "get_zone_info": false, 00:18:09.775 "zone_management": false, 00:18:09.775 "zone_append": false, 00:18:09.775 "compare": false, 00:18:09.775 "compare_and_write": false, 00:18:09.775 "abort": false, 00:18:09.775 "seek_hole": false, 00:18:09.775 "seek_data": false, 00:18:09.775 "copy": false, 00:18:09.775 "nvme_iov_md": false 00:18:09.775 }, 00:18:09.775 "driver_specific": { 00:18:09.775 "ftl": { 00:18:09.775 "base_bdev": "3b2da08d-c738-4189-a320-bcd7dee96b25", 00:18:09.775 "cache": "nvc0n1p0" 00:18:09.775 } 00:18:09.775 } 00:18:09.775 } 00:18:09.775 ] 00:18:09.775 03:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:09.775 03:26:57 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:09.775 03:26:57 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:10.033 03:26:57 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:10.033 03:26:57 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:10.294 [2024-11-21 03:26:57.699836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.294 [2024-11-21 03:26:57.699875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:10.294 [2024-11-21 03:26:57.699886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:10.294 [2024-11-21 03:26:57.699894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.294 [2024-11-21 03:26:57.699933] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:10.294 [2024-11-21 03:26:57.700363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.294 [2024-11-21 03:26:57.700375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:10.294 [2024-11-21 03:26:57.700388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:18:10.294 [2024-11-21 03:26:57.700394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.294 [2024-11-21 03:26:57.700804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.294 [2024-11-21 03:26:57.700821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:10.294 [2024-11-21 03:26:57.700830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:18:10.294 [2024-11-21 03:26:57.700836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.294 [2024-11-21 03:26:57.703273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.294 [2024-11-21 03:26:57.703291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:10.294 [2024-11-21 
03:26:57.703300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.402 ms 00:18:10.294 [2024-11-21 03:26:57.703308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.294 [2024-11-21 03:26:57.707911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.294 [2024-11-21 03:26:57.707932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:10.294 [2024-11-21 03:26:57.707941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.571 ms 00:18:10.294 [2024-11-21 03:26:57.707953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.294 [2024-11-21 03:26:57.709433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.294 [2024-11-21 03:26:57.709536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:10.294 [2024-11-21 03:26:57.709550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:18:10.294 [2024-11-21 03:26:57.709556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.294 [2024-11-21 03:26:57.713720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.294 [2024-11-21 03:26:57.713837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:10.294 [2024-11-21 03:26:57.713884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.109 ms 00:18:10.294 [2024-11-21 03:26:57.713936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.294 [2024-11-21 03:26:57.714432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.294 [2024-11-21 03:26:57.714494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:10.294 [2024-11-21 03:26:57.714527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:18:10.294 [2024-11-21 03:26:57.714549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.294 [2024-11-21 03:26:57.717046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.294 [2024-11-21 03:26:57.717122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:10.294 [2024-11-21 03:26:57.717152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.422 ms 00:18:10.294 [2024-11-21 03:26:57.717173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.294 [2024-11-21 03:26:57.719283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.294 [2024-11-21 03:26:57.719358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:10.294 [2024-11-21 03:26:57.719388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.998 ms 00:18:10.294 [2024-11-21 03:26:57.719409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.294 [2024-11-21 03:26:57.721277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.294 [2024-11-21 03:26:57.721352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:10.294 [2024-11-21 03:26:57.721386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.769 ms 00:18:10.294 [2024-11-21 03:26:57.721410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.294 [2024-11-21 03:26:57.723268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.294 [2024-11-21 03:26:57.723340] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:10.294 [2024-11-21 03:26:57.723370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.657 ms 00:18:10.294 [2024-11-21 03:26:57.723390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.294 [2024-11-21 03:26:57.723475] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:10.294 [2024-11-21 03:26:57.723513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.723546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.723571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.723609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.723634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.723663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.723686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.723714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.723738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.723766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.723790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.723818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.723842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.723869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.723893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.723966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.723991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.724018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.724041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.724072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.724095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.724122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 
03:26:57.724145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.724172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.724462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.724503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.724528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.724556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.724579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.724607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.724630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.724662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.724685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.724712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:10.294 [2024-11-21 03:26:57.724735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.724766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.724790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.724817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.724840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.724867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.724890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.724945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.724969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.724997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:18:10.295 [2024-11-21 03:26:57.725097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.725986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:10.295 [2024-11-21 03:26:57.726600] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:10.295 [2024-11-21 03:26:57.726611] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0fe2f1eb-fe71-455d-9b94-5e2c9754202b 00:18:10.295 [2024-11-21 03:26:57.726619] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:10.295 [2024-11-21 03:26:57.726627] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:10.295 [2024-11-21 03:26:57.726645] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:10.295 [2024-11-21 03:26:57.726656] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:10.295 [2024-11-21 03:26:57.726663] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:10.295 [2024-11-21 03:26:57.726681] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:10.295 [2024-11-21 03:26:57.726688] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:10.295 [2024-11-21 03:26:57.726696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:10.295 [2024-11-21 03:26:57.726702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:10.295 [2024-11-21 03:26:57.726711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.295 [2024-11-21 03:26:57.726718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:10.295 [2024-11-21 03:26:57.726727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.243 ms 00:18:10.295 [2024-11-21 03:26:57.726734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.295 [2024-11-21 03:26:57.728442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.295 [2024-11-21 03:26:57.728532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:10.296 [2024-11-21 03:26:57.728601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.675 ms 00:18:10.296 [2024-11-21 03:26:57.728654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.296 [2024-11-21 03:26:57.728765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.296 [2024-11-21 03:26:57.728832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:10.296 [2024-11-21 03:26:57.728929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:10.296 [2024-11-21 03:26:57.728955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.296 [2024-11-21 03:26:57.734278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.296 [2024-11-21 03:26:57.734382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:10.296 [2024-11-21 03:26:57.734433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.296 [2024-11-21 03:26:57.734456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.296 
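A quick read of the statistics dump above: all 100 bands are still in the free state with wr_cnt 0, total writes is 960 (all of it FTL metadata traffic) against 0 user writes, and WAF (write amplification factor, total writes divided by user writes) is therefore reported as inf; it stays infinite until the first user I/O lands. A rough, hypothetical helper for pulling those two counters out of a captured shutdown log, assuming one log entry per line with the field layout of the ftl_dev_dump_stats lines above (ftl_shutdown.log is a made-up capture file, not part of the test suite):

    awk '/total writes:/ { total = $NF }
         /user writes:/  { user  = $NF }
         END {
           if (user == 0) print "WAF: inf"
           else           printf "WAF: %.2f\n", total / user
         }' ftl_shutdown.log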
[2024-11-21 03:26:57.734538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.296 [2024-11-21 03:26:57.734566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:10.296 [2024-11-21 03:26:57.734612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.296 [2024-11-21 03:26:57.734639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.296 [2024-11-21 03:26:57.734716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.296 [2024-11-21 03:26:57.734820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:10.296 [2024-11-21 03:26:57.734847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.296 [2024-11-21 03:26:57.734866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.296 [2024-11-21 03:26:57.734919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.296 [2024-11-21 03:26:57.735048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:10.296 [2024-11-21 03:26:57.735073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.296 [2024-11-21 03:26:57.735092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.296 [2024-11-21 03:26:57.744429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.296 [2024-11-21 03:26:57.744553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:10.296 [2024-11-21 03:26:57.744609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.296 [2024-11-21 03:26:57.744632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.296 [2024-11-21 03:26:57.752349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.296 [2024-11-21 03:26:57.752481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:10.296 [2024-11-21 03:26:57.752536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.296 [2024-11-21 03:26:57.752560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.296 [2024-11-21 03:26:57.752676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.296 [2024-11-21 03:26:57.752703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.296 [2024-11-21 03:26:57.752805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.296 [2024-11-21 03:26:57.752827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.296 [2024-11-21 03:26:57.752910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.296 [2024-11-21 03:26:57.752936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.296 [2024-11-21 03:26:57.752958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.296 [2024-11-21 03:26:57.753005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.296 [2024-11-21 03:26:57.753161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.296 [2024-11-21 03:26:57.753220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:10.296 [2024-11-21 03:26:57.753268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.296 [2024-11-21 03:26:57.753290] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.296 [2024-11-21 03:26:57.753358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.296 [2024-11-21 03:26:57.753469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:10.296 [2024-11-21 03:26:57.753495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.296 [2024-11-21 03:26:57.753514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.296 [2024-11-21 03:26:57.753572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.296 [2024-11-21 03:26:57.753695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.296 [2024-11-21 03:26:57.753721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.296 [2024-11-21 03:26:57.753740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.296 [2024-11-21 03:26:57.753822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.296 [2024-11-21 03:26:57.753853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.296 [2024-11-21 03:26:57.753874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.296 [2024-11-21 03:26:57.753893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.296 [2024-11-21 03:26:57.754123] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.237 ms, result 0 00:18:10.296 true 00:18:10.296 03:26:57 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 87916 00:18:10.296 03:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 87916 ']' 00:18:10.296 03:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 87916 00:18:10.296 03:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:10.296 03:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.296 03:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87916 00:18:10.296 03:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:10.296 03:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:10.296 killing process with pid 87916 00:18:10.296 03:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87916' 00:18:10.296 03:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 87916 00:18:10.296 03:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 87916 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:15.588 03:27:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:15.588 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:15.588 fio-3.35 00:18:15.588 Starting 1 thread 00:18:19.795 00:18:19.795 test: (groupid=0, jobs=1): err= 0: pid=88074: Thu Nov 21 03:27:07 2024 00:18:19.795 read: IOPS=1128, BW=75.0MiB/s (78.6MB/s)(255MiB/3396msec) 00:18:19.795 slat (nsec): min=3004, max=23512, avg=4245.19, stdev=1994.15 00:18:19.795 clat (usec): min=237, max=1412, avg=404.32, stdev=152.86 00:18:19.795 lat (usec): min=241, max=1418, avg=408.57, stdev=153.50 00:18:19.795 clat percentiles (usec): 00:18:19.795 | 1.00th=[ 277], 5.00th=[ 293], 10.00th=[ 318], 20.00th=[ 322], 00:18:19.795 | 30.00th=[ 322], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 334], 00:18:19.795 | 70.00th=[ 424], 80.00th=[ 523], 90.00th=[ 553], 95.00th=[ 799], 00:18:19.795 | 99.00th=[ 930], 99.50th=[ 1012], 99.90th=[ 1188], 99.95th=[ 1369], 00:18:19.795 | 99.99th=[ 1418] 00:18:19.795 write: IOPS=1136, BW=75.5MiB/s (79.1MB/s)(256MiB/3393msec); 0 zone resets 00:18:19.795 slat (nsec): min=13761, max=86051, avg=18026.66, stdev=3677.48 00:18:19.795 clat (usec): min=292, max=1560, avg=443.67, stdev=170.12 00:18:19.795 lat (usec): min=312, max=1579, avg=461.69, stdev=171.29 00:18:19.795 clat percentiles (usec): 00:18:19.795 | 1.00th=[ 306], 5.00th=[ 322], 10.00th=[ 343], 20.00th=[ 347], 00:18:19.795 | 30.00th=[ 347], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 363], 00:18:19.795 | 70.00th=[ 457], 80.00th=[ 553], 90.00th=[ 644], 95.00th=[ 881], 00:18:19.795 | 99.00th=[ 1012], 99.50th=[ 1139], 99.90th=[ 1254], 99.95th=[ 1287], 00:18:19.795 | 99.99th=[ 1565] 00:18:19.795 bw ( KiB/s): min=54400, max=93024, per=96.97%, avg=74935.33, stdev=16903.46, samples=6 00:18:19.795 iops : min= 800, max= 1368, avg=1101.83, stdev=248.64, samples=6 00:18:19.795 lat (usec) : 250=0.04%, 500=75.00%, 
750=18.10%, 1000=6.01% 00:18:19.795 lat (msec) : 2=0.85% 00:18:19.796 cpu : usr=99.23%, sys=0.18%, ctx=6, majf=0, minf=1181 00:18:19.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:19.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.796 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:19.796 00:18:19.796 Run status group 0 (all jobs): 00:18:19.796 READ: bw=75.0MiB/s (78.6MB/s), 75.0MiB/s-75.0MiB/s (78.6MB/s-78.6MB/s), io=255MiB (267MB), run=3396-3396msec 00:18:19.796 WRITE: bw=75.5MiB/s (79.1MB/s), 75.5MiB/s-75.5MiB/s (79.1MB/s-79.1MB/s), io=256MiB (269MB), run=3393-3393msec 00:18:20.369 ----------------------------------------------------- 00:18:20.369 Suppressions used: 00:18:20.369 count bytes template 00:18:20.369 1 5 /usr/src/fio/parse.c 00:18:20.369 1 8 libtcmalloc_minimal.so 00:18:20.369 1 904 libcrypto.so 00:18:20.369 ----------------------------------------------------- 00:18:20.369 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:20.369 03:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:20.630 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:20.630 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:20.630 fio-3.35 00:18:20.630 Starting 2 threads 00:18:47.254 00:18:47.254 first_half: (groupid=0, jobs=1): err= 0: pid=88160: Thu Nov 21 03:27:30 2024 00:18:47.254 read: IOPS=2928, BW=11.4MiB/s (12.0MB/s)(255MiB/22280msec) 00:18:47.254 slat (nsec): min=3032, max=24648, avg=4339.80, stdev=998.59 00:18:47.254 clat (usec): min=552, max=257764, avg=33184.61, stdev=17006.37 00:18:47.254 lat (usec): min=556, max=257768, avg=33188.95, stdev=17006.43 00:18:47.254 clat percentiles (msec): 00:18:47.254 | 1.00th=[ 8], 5.00th=[ 19], 10.00th=[ 29], 20.00th=[ 30], 00:18:47.254 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:18:47.254 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 37], 95.00th=[ 42], 00:18:47.254 | 99.00th=[ 136], 99.50th=[ 148], 99.90th=[ 199], 99.95th=[ 220], 00:18:47.254 | 99.99th=[ 251] 00:18:47.254 write: IOPS=3469, BW=13.6MiB/s (14.2MB/s)(256MiB/18890msec); 0 zone resets 00:18:47.254 slat (usec): min=3, max=223, avg= 5.90, stdev= 3.00 00:18:47.254 clat (usec): min=355, max=76421, avg=10436.21, stdev=16617.91 00:18:47.254 lat (usec): min=363, max=76426, avg=10442.11, stdev=16618.03 00:18:47.254 clat percentiles (usec): 00:18:47.254 | 1.00th=[ 627], 5.00th=[ 725], 10.00th=[ 816], 20.00th=[ 1156], 00:18:47.254 | 30.00th=[ 2900], 40.00th=[ 4621], 50.00th=[ 5211], 60.00th=[ 5538], 00:18:47.254 | 70.00th=[ 6325], 80.00th=[10421], 90.00th=[29754], 95.00th=[60031], 00:18:47.254 | 99.00th=[65274], 99.50th=[66847], 99.90th=[74974], 99.95th=[76022], 00:18:47.254 | 99.99th=[76022] 00:18:47.254 bw ( KiB/s): min= 336, max=41000, per=72.65%, avg=20164.92, stdev=12257.50, samples=26 00:18:47.254 iops : min= 84, max=10250, avg=5041.23, stdev=3064.38, samples=26 00:18:47.254 lat (usec) : 500=0.06%, 750=3.24%, 1000=4.75% 00:18:47.254 lat (msec) : 2=4.74%, 4=5.65%, 10=22.73%, 20=5.81%, 50=46.73% 00:18:47.254 lat (msec) : 100=5.46%, 250=0.84%, 500=0.01% 00:18:47.254 cpu : usr=99.30%, sys=0.10%, ctx=34, majf=0, minf=5575 00:18:47.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:47.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.254 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.254 issued rwts: total=65242,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.254 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.254 second_half: (groupid=0, jobs=1): err= 0: pid=88161: Thu Nov 21 03:27:30 2024 00:18:47.254 read: IOPS=2949, BW=11.5MiB/s (12.1MB/s)(254MiB/22083msec) 00:18:47.254 slat (nsec): min=2984, max=58787, avg=4905.78, stdev=1097.62 00:18:47.254 clat (usec): min=620, max=261448, avg=34040.26, stdev=16291.03 00:18:47.254 lat (usec): min=627, max=261454, avg=34045.16, stdev=16291.11 00:18:47.254 clat percentiles (msec): 00:18:47.254 | 1.00th=[ 5], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 30], 00:18:47.254 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:18:47.254 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 38], 
95.00th=[ 43], 00:18:47.254 | 99.00th=[ 124], 99.50th=[ 146], 99.90th=[ 171], 99.95th=[ 190], 00:18:47.254 | 99.99th=[ 257] 00:18:47.254 write: IOPS=4661, BW=18.2MiB/s (19.1MB/s)(256MiB/14059msec); 0 zone resets 00:18:47.254 slat (usec): min=3, max=274, avg= 6.85, stdev= 3.00 00:18:47.254 clat (usec): min=385, max=76699, avg=9276.46, stdev=16292.51 00:18:47.254 lat (usec): min=397, max=76706, avg=9283.31, stdev=16292.65 00:18:47.254 clat percentiles (usec): 00:18:47.254 | 1.00th=[ 652], 5.00th=[ 750], 10.00th=[ 832], 20.00th=[ 1029], 00:18:47.254 | 30.00th=[ 1336], 40.00th=[ 2671], 50.00th=[ 3916], 60.00th=[ 5145], 00:18:47.254 | 70.00th=[ 5997], 80.00th=[10028], 90.00th=[14091], 95.00th=[59507], 00:18:47.254 | 99.00th=[65274], 99.50th=[66847], 99.90th=[74974], 99.95th=[76022], 00:18:47.254 | 99.99th=[76022] 00:18:47.254 bw ( KiB/s): min= 2488, max=41912, per=99.42%, avg=27593.68, stdev=13802.13, samples=19 00:18:47.254 iops : min= 622, max=10478, avg=6898.53, stdev=3450.59, samples=19 00:18:47.254 lat (usec) : 500=0.01%, 750=2.55%, 1000=6.89% 00:18:47.254 lat (msec) : 2=8.18%, 4=8.19%, 10=15.18%, 20=5.80%, 50=46.75% 00:18:47.254 lat (msec) : 100=5.61%, 250=0.84%, 500=0.01% 00:18:47.254 cpu : usr=99.33%, sys=0.12%, ctx=50, majf=0, minf=5569 00:18:47.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:47.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.255 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.255 issued rwts: total=65143,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.255 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.255 00:18:47.255 Run status group 0 (all jobs): 00:18:47.255 READ: bw=22.9MiB/s (24.0MB/s), 11.4MiB/s-11.5MiB/s (12.0MB/s-12.1MB/s), io=509MiB (534MB), run=22083-22280msec 00:18:47.255 WRITE: bw=27.1MiB/s (28.4MB/s), 13.6MiB/s-18.2MiB/s (14.2MB/s-19.1MB/s), io=512MiB (537MB), run=14059-18890msec 00:18:47.255 ----------------------------------------------------- 00:18:47.255 Suppressions used: 00:18:47.255 count bytes template 00:18:47.255 2 10 /usr/src/fio/parse.c 00:18:47.255 2 192 /usr/src/fio/iolog.c 00:18:47.255 1 8 libtcmalloc_minimal.so 00:18:47.255 1 904 libcrypto.so 00:18:47.255 ----------------------------------------------------- 00:18:47.255 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
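One step of the fio_plugin helper traced here (and in the two runs before it) is easy to miss: it runs ldd against the spdk_bdev fio plugin, greps out the ASan runtime the plugin was linked with, and preloads that library ahead of the plugin itself, presumably so the sanitizer runtime is initialized before fio dlopen()s the ioengine. A condensed sketch of the pattern, using the paths from this run (job.fio stands in for the actual job file):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 in this run
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio job.fio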
00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:47.255 03:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:47.255 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:47.255 fio-3.35 00:18:47.255 Starting 1 thread 00:19:02.162 00:19:02.162 test: (groupid=0, jobs=1): err= 0: pid=88446: Thu Nov 21 03:27:47 2024 00:19:02.162 read: IOPS=7035, BW=27.5MiB/s (28.8MB/s)(255MiB/9267msec) 00:19:02.162 slat (usec): min=3, max=110, avg= 5.28, stdev= 1.96 00:19:02.162 clat (usec): min=1242, max=35883, avg=18182.04, stdev=2983.21 00:19:02.162 lat (usec): min=1248, max=35887, avg=18187.32, stdev=2983.66 00:19:02.162 clat percentiles (usec): 00:19:02.162 | 1.00th=[14877], 5.00th=[15270], 10.00th=[15533], 20.00th=[15926], 00:19:02.162 | 30.00th=[16319], 40.00th=[16712], 50.00th=[17433], 60.00th=[17957], 00:19:02.162 | 70.00th=[18744], 80.00th=[19792], 90.00th=[21890], 95.00th=[24773], 00:19:02.162 | 99.00th=[28967], 99.50th=[31327], 99.90th=[32900], 99.95th=[33424], 00:19:02.162 | 99.99th=[35390] 00:19:02.162 write: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(256MiB/5537msec); 0 zone resets 00:19:02.162 slat (usec): min=4, max=134, avg= 6.79, stdev= 2.90 00:19:02.162 clat (usec): min=525, max=78668, avg=10766.41, stdev=13302.44 00:19:02.162 lat (usec): min=531, max=78679, avg=10773.20, stdev=13302.56 00:19:02.162 clat percentiles (usec): 00:19:02.162 | 1.00th=[ 766], 5.00th=[ 996], 10.00th=[ 1139], 20.00th=[ 1336], 00:19:02.162 | 30.00th=[ 1582], 40.00th=[ 2212], 50.00th=[ 7439], 60.00th=[ 8848], 00:19:02.162 | 70.00th=[ 9765], 80.00th=[11600], 90.00th=[36963], 95.00th=[39584], 00:19:02.162 | 99.00th=[51119], 99.50th=[53216], 99.90th=[60031], 99.95th=[65274], 00:19:02.162 | 99.99th=[73925] 00:19:02.163 bw ( KiB/s): min= 1168, max=60200, per=92.28%, avg=43690.67, stdev=15389.38, samples=12 00:19:02.163 iops : min= 292, max=15050, avg=10922.67, stdev=3847.34, samples=12 00:19:02.163 lat (usec) : 750=0.42%, 1000=2.21% 00:19:02.163 lat (msec) : 2=16.47%, 4=1.95%, 10=15.01%, 20=46.48%, 50=16.77% 00:19:02.163 lat (msec) : 100=0.70% 00:19:02.163 cpu : usr=99.03%, sys=0.19%, ctx=33, 
majf=0, minf=5577 00:19:02.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:02.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.163 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:02.163 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:02.163 00:19:02.163 Run status group 0 (all jobs): 00:19:02.163 READ: bw=27.5MiB/s (28.8MB/s), 27.5MiB/s-27.5MiB/s (28.8MB/s-28.8MB/s), io=255MiB (267MB), run=9267-9267msec 00:19:02.163 WRITE: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=256MiB (268MB), run=5537-5537msec 00:19:02.163 ----------------------------------------------------- 00:19:02.163 Suppressions used: 00:19:02.163 count bytes template 00:19:02.163 1 5 /usr/src/fio/parse.c 00:19:02.163 2 192 /usr/src/fio/iolog.c 00:19:02.163 1 8 libtcmalloc_minimal.so 00:19:02.163 1 904 libcrypto.so 00:19:02.163 ----------------------------------------------------- 00:19:02.163 00:19:02.163 03:27:48 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:02.163 03:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:02.163 03:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:02.163 03:27:48 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:02.163 03:27:48 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:02.163 Remove shared memory files 00:19:02.163 03:27:48 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:02.163 03:27:48 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:02.163 03:27:48 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:02.163 03:27:48 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid70917 /dev/shm/spdk_tgt_trace.pid86857 00:19:02.163 03:27:48 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:02.163 03:27:48 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:02.163 ************************************ 00:19:02.163 END TEST ftl_fio_basic 00:19:02.163 ************************************ 00:19:02.163 00:19:02.163 real 0m57.614s 00:19:02.163 user 2m3.809s 00:19:02.163 sys 0m2.799s 00:19:02.163 03:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.163 03:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:02.163 03:27:48 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:02.163 03:27:48 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:02.163 03:27:48 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.163 03:27:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:02.163 ************************************ 00:19:02.163 START TEST ftl_bdevperf 00:19:02.163 ************************************ 00:19:02.163 03:27:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:02.163 * Looking for test storage... 
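Before moving on, the depth128 summary a few entries up is easy to sanity-check: fio's bandwidth figures are just I/O size over runtime, so 255 MiB read in 9267 ms matches the reported 27.5 MiB/s and 256 MiB written in 5537 ms matches 46.2 MiB/s (likewise 65202 reads over 9.267 s give the reported 7035 IOPS). For example:

    echo 'scale=1; 255 / 9.267' | bc    # READ:  27.5 (MiB/s)
    echo 'scale=1; 256 / 5.537' | bc    # WRITE: 46.2 (MiB/s)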
00:19:02.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:02.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.163 --rc genhtml_branch_coverage=1 00:19:02.163 --rc genhtml_function_coverage=1 00:19:02.163 --rc genhtml_legend=1 00:19:02.163 --rc geninfo_all_blocks=1 00:19:02.163 --rc geninfo_unexecuted_blocks=1 00:19:02.163 00:19:02.163 ' 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:02.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.163 --rc genhtml_branch_coverage=1 00:19:02.163 
--rc genhtml_function_coverage=1 00:19:02.163 --rc genhtml_legend=1 00:19:02.163 --rc geninfo_all_blocks=1 00:19:02.163 --rc geninfo_unexecuted_blocks=1 00:19:02.163 00:19:02.163 ' 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:02.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.163 --rc genhtml_branch_coverage=1 00:19:02.163 --rc genhtml_function_coverage=1 00:19:02.163 --rc genhtml_legend=1 00:19:02.163 --rc geninfo_all_blocks=1 00:19:02.163 --rc geninfo_unexecuted_blocks=1 00:19:02.163 00:19:02.163 ' 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:02.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.163 --rc genhtml_branch_coverage=1 00:19:02.163 --rc genhtml_function_coverage=1 00:19:02.163 --rc genhtml_legend=1 00:19:02.163 --rc geninfo_all_blocks=1 00:19:02.163 --rc geninfo_unexecuted_blocks=1 00:19:02.163 00:19:02.163 ' 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:02.163 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:02.164 03:27:49 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:02.164 03:27:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:02.164 03:27:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:02.164 03:27:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:02.164 03:27:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.164 03:27:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:02.164 03:27:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=88690 00:19:02.164 03:27:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:02.164 03:27:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:02.164 03:27:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 88690 00:19:02.164 03:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 88690 ']' 00:19:02.164 03:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.164 03:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.164 03:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.164 03:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.164 03:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:02.164 [2024-11-21 03:27:49.190296] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:19:02.164 [2024-11-21 03:27:49.191343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88690 ] 00:19:02.164 [2024-11-21 03:27:49.331992] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
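The -z flag on the bdevperf invocation above starts the application idle: no workload runs until it is told to, which is what lets the script attach controllers and assemble the ftl0 target (named by -T) over the RPC socket first. The measurement is then kicked off through bdevperf's own RPC helper; a sketch of the two phases, with the helper path as it appears in recent SPDK trees:

    # phase 1: build the bdev stack while bdevperf -z sits idle
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    # ... lvstore, NV cache and ftl0 setup as traced below ...
    # phase 2: start the actual run over bdevperf's RPC
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests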
00:19:02.164 [2024-11-21 03:27:49.363283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.164 [2024-11-21 03:27:49.392575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.734 03:27:50 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.734 03:27:50 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:02.734 03:27:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:02.734 03:27:50 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:02.734 03:27:50 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:02.734 03:27:50 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:02.734 03:27:50 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:02.734 03:27:50 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:02.994 03:27:50 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:02.994 03:27:50 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:02.994 03:27:50 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:02.994 03:27:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:02.994 03:27:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:02.994 03:27:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:02.994 03:27:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:02.994 03:27:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:03.255 03:27:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:03.255 { 00:19:03.255 "name": "nvme0n1", 00:19:03.255 "aliases": [ 00:19:03.255 "9c54867c-2dda-4308-9d91-356a6747fc5a" 00:19:03.255 ], 00:19:03.255 "product_name": "NVMe disk", 00:19:03.255 "block_size": 4096, 00:19:03.255 "num_blocks": 1310720, 00:19:03.255 "uuid": "9c54867c-2dda-4308-9d91-356a6747fc5a", 00:19:03.255 "numa_id": -1, 00:19:03.255 "assigned_rate_limits": { 00:19:03.255 "rw_ios_per_sec": 0, 00:19:03.255 "rw_mbytes_per_sec": 0, 00:19:03.255 "r_mbytes_per_sec": 0, 00:19:03.255 "w_mbytes_per_sec": 0 00:19:03.255 }, 00:19:03.255 "claimed": true, 00:19:03.255 "claim_type": "read_many_write_one", 00:19:03.255 "zoned": false, 00:19:03.255 "supported_io_types": { 00:19:03.255 "read": true, 00:19:03.255 "write": true, 00:19:03.255 "unmap": true, 00:19:03.255 "flush": true, 00:19:03.255 "reset": true, 00:19:03.255 "nvme_admin": true, 00:19:03.255 "nvme_io": true, 00:19:03.255 "nvme_io_md": false, 00:19:03.255 "write_zeroes": true, 00:19:03.255 "zcopy": false, 00:19:03.255 "get_zone_info": false, 00:19:03.255 "zone_management": false, 00:19:03.255 "zone_append": false, 00:19:03.255 "compare": true, 00:19:03.255 "compare_and_write": false, 00:19:03.255 "abort": true, 00:19:03.255 "seek_hole": false, 00:19:03.255 "seek_data": false, 00:19:03.255 "copy": true, 00:19:03.255 "nvme_iov_md": false 00:19:03.255 }, 00:19:03.255 "driver_specific": { 00:19:03.255 "nvme": [ 00:19:03.255 { 00:19:03.255 "pci_address": "0000:00:11.0", 00:19:03.255 "trid": { 00:19:03.255 "trtype": "PCIe", 00:19:03.255 "traddr": "0000:00:11.0" 00:19:03.255 }, 00:19:03.255 "ctrlr_data": { 00:19:03.255 "cntlid": 0, 00:19:03.255 "vendor_id": "0x1b36", 00:19:03.255 "model_number": "QEMU NVMe Ctrl", 
00:19:03.255 "serial_number": "12341", 00:19:03.255 "firmware_revision": "8.0.0", 00:19:03.255 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:03.255 "oacs": { 00:19:03.255 "security": 0, 00:19:03.255 "format": 1, 00:19:03.255 "firmware": 0, 00:19:03.255 "ns_manage": 1 00:19:03.255 }, 00:19:03.255 "multi_ctrlr": false, 00:19:03.255 "ana_reporting": false 00:19:03.255 }, 00:19:03.255 "vs": { 00:19:03.255 "nvme_version": "1.4" 00:19:03.255 }, 00:19:03.255 "ns_data": { 00:19:03.255 "id": 1, 00:19:03.255 "can_share": false 00:19:03.255 } 00:19:03.255 } 00:19:03.255 ], 00:19:03.255 "mp_policy": "active_passive" 00:19:03.255 } 00:19:03.255 } 00:19:03.255 ]' 00:19:03.255 03:27:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:03.255 03:27:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:03.255 03:27:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:03.255 03:27:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:03.255 03:27:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:03.255 03:27:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:03.255 03:27:50 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:03.255 03:27:50 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:03.255 03:27:50 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:03.255 03:27:50 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:03.255 03:27:50 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:03.516 03:27:50 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=d07412b2-2f4f-499c-b334-a66ac18ec7e3 00:19:03.516 03:27:50 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:03.516 03:27:50 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d07412b2-2f4f-499c-b334-a66ac18ec7e3 00:19:03.777 03:27:51 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:03.777 03:27:51 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=81eddf66-6d86-422b-ab89-ffd25c558de4 00:19:03.777 03:27:51 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 81eddf66-6d86-422b-ab89-ffd25c558de4 00:19:04.037 03:27:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=b03cab03-2117-4fe4-b19a-b91b596bbcd3 00:19:04.037 03:27:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b03cab03-2117-4fe4-b19a-b91b596bbcd3 00:19:04.037 03:27:51 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:04.037 03:27:51 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:04.037 03:27:51 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=b03cab03-2117-4fe4-b19a-b91b596bbcd3 00:19:04.037 03:27:51 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:04.037 03:27:51 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size b03cab03-2117-4fe4-b19a-b91b596bbcd3 00:19:04.037 03:27:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=b03cab03-2117-4fe4-b19a-b91b596bbcd3 00:19:04.037 03:27:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:04.037 03:27:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:04.037 03:27:51 
ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:04.037 03:27:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b03cab03-2117-4fe4-b19a-b91b596bbcd3 00:19:04.298 03:27:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:04.298 { 00:19:04.298 "name": "b03cab03-2117-4fe4-b19a-b91b596bbcd3", 00:19:04.298 "aliases": [ 00:19:04.298 "lvs/nvme0n1p0" 00:19:04.298 ], 00:19:04.298 "product_name": "Logical Volume", 00:19:04.298 "block_size": 4096, 00:19:04.298 "num_blocks": 26476544, 00:19:04.298 "uuid": "b03cab03-2117-4fe4-b19a-b91b596bbcd3", 00:19:04.298 "assigned_rate_limits": { 00:19:04.298 "rw_ios_per_sec": 0, 00:19:04.298 "rw_mbytes_per_sec": 0, 00:19:04.298 "r_mbytes_per_sec": 0, 00:19:04.298 "w_mbytes_per_sec": 0 00:19:04.298 }, 00:19:04.298 "claimed": false, 00:19:04.298 "zoned": false, 00:19:04.298 "supported_io_types": { 00:19:04.298 "read": true, 00:19:04.298 "write": true, 00:19:04.298 "unmap": true, 00:19:04.298 "flush": false, 00:19:04.298 "reset": true, 00:19:04.298 "nvme_admin": false, 00:19:04.298 "nvme_io": false, 00:19:04.298 "nvme_io_md": false, 00:19:04.298 "write_zeroes": true, 00:19:04.298 "zcopy": false, 00:19:04.298 "get_zone_info": false, 00:19:04.298 "zone_management": false, 00:19:04.298 "zone_append": false, 00:19:04.298 "compare": false, 00:19:04.298 "compare_and_write": false, 00:19:04.298 "abort": false, 00:19:04.298 "seek_hole": true, 00:19:04.298 "seek_data": true, 00:19:04.298 "copy": false, 00:19:04.298 "nvme_iov_md": false 00:19:04.298 }, 00:19:04.298 "driver_specific": { 00:19:04.298 "lvol": { 00:19:04.298 "lvol_store_uuid": "81eddf66-6d86-422b-ab89-ffd25c558de4", 00:19:04.298 "base_bdev": "nvme0n1", 00:19:04.298 "thin_provision": true, 00:19:04.298 "num_allocated_clusters": 0, 00:19:04.298 "snapshot": false, 00:19:04.298 "clone": false, 00:19:04.298 "esnap_clone": false 00:19:04.298 } 00:19:04.298 } 00:19:04.298 } 00:19:04.298 ]' 00:19:04.299 03:27:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:04.299 03:27:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:04.299 03:27:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:04.299 03:27:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:04.299 03:27:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:04.299 03:27:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:04.299 03:27:51 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:04.299 03:27:51 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:04.299 03:27:51 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:04.559 03:27:52 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:04.559 03:27:52 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:04.559 03:27:52 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size b03cab03-2117-4fe4-b19a-b91b596bbcd3 00:19:04.559 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=b03cab03-2117-4fe4-b19a-b91b596bbcd3 00:19:04.559 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:04.559 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:04.559 03:27:52 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1385 -- # local nb 00:19:04.559 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b03cab03-2117-4fe4-b19a-b91b596bbcd3 00:19:04.819 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:04.819 { 00:19:04.819 "name": "b03cab03-2117-4fe4-b19a-b91b596bbcd3", 00:19:04.819 "aliases": [ 00:19:04.819 "lvs/nvme0n1p0" 00:19:04.819 ], 00:19:04.819 "product_name": "Logical Volume", 00:19:04.819 "block_size": 4096, 00:19:04.819 "num_blocks": 26476544, 00:19:04.819 "uuid": "b03cab03-2117-4fe4-b19a-b91b596bbcd3", 00:19:04.819 "assigned_rate_limits": { 00:19:04.819 "rw_ios_per_sec": 0, 00:19:04.819 "rw_mbytes_per_sec": 0, 00:19:04.819 "r_mbytes_per_sec": 0, 00:19:04.819 "w_mbytes_per_sec": 0 00:19:04.819 }, 00:19:04.819 "claimed": false, 00:19:04.819 "zoned": false, 00:19:04.819 "supported_io_types": { 00:19:04.819 "read": true, 00:19:04.819 "write": true, 00:19:04.819 "unmap": true, 00:19:04.819 "flush": false, 00:19:04.819 "reset": true, 00:19:04.819 "nvme_admin": false, 00:19:04.819 "nvme_io": false, 00:19:04.819 "nvme_io_md": false, 00:19:04.819 "write_zeroes": true, 00:19:04.819 "zcopy": false, 00:19:04.819 "get_zone_info": false, 00:19:04.819 "zone_management": false, 00:19:04.819 "zone_append": false, 00:19:04.819 "compare": false, 00:19:04.819 "compare_and_write": false, 00:19:04.819 "abort": false, 00:19:04.819 "seek_hole": true, 00:19:04.819 "seek_data": true, 00:19:04.819 "copy": false, 00:19:04.819 "nvme_iov_md": false 00:19:04.819 }, 00:19:04.819 "driver_specific": { 00:19:04.819 "lvol": { 00:19:04.819 "lvol_store_uuid": "81eddf66-6d86-422b-ab89-ffd25c558de4", 00:19:04.819 "base_bdev": "nvme0n1", 00:19:04.819 "thin_provision": true, 00:19:04.819 "num_allocated_clusters": 0, 00:19:04.819 "snapshot": false, 00:19:04.819 "clone": false, 00:19:04.819 "esnap_clone": false 00:19:04.819 } 00:19:04.819 } 00:19:04.819 } 00:19:04.819 ]' 00:19:04.819 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:04.819 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:04.819 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:04.819 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:04.819 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:04.819 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:04.819 03:27:52 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:04.819 03:27:52 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:05.080 03:27:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:05.080 03:27:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size b03cab03-2117-4fe4-b19a-b91b596bbcd3 00:19:05.080 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=b03cab03-2117-4fe4-b19a-b91b596bbcd3 00:19:05.080 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:05.080 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:05.080 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:05.080 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b03cab03-2117-4fe4-b19a-b91b596bbcd3 
00:19:05.340 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:05.340 { 00:19:05.340 "name": "b03cab03-2117-4fe4-b19a-b91b596bbcd3", 00:19:05.340 "aliases": [ 00:19:05.340 "lvs/nvme0n1p0" 00:19:05.340 ], 00:19:05.340 "product_name": "Logical Volume", 00:19:05.340 "block_size": 4096, 00:19:05.340 "num_blocks": 26476544, 00:19:05.340 "uuid": "b03cab03-2117-4fe4-b19a-b91b596bbcd3", 00:19:05.340 "assigned_rate_limits": { 00:19:05.340 "rw_ios_per_sec": 0, 00:19:05.340 "rw_mbytes_per_sec": 0, 00:19:05.340 "r_mbytes_per_sec": 0, 00:19:05.340 "w_mbytes_per_sec": 0 00:19:05.340 }, 00:19:05.340 "claimed": false, 00:19:05.340 "zoned": false, 00:19:05.340 "supported_io_types": { 00:19:05.340 "read": true, 00:19:05.340 "write": true, 00:19:05.340 "unmap": true, 00:19:05.340 "flush": false, 00:19:05.340 "reset": true, 00:19:05.340 "nvme_admin": false, 00:19:05.340 "nvme_io": false, 00:19:05.340 "nvme_io_md": false, 00:19:05.340 "write_zeroes": true, 00:19:05.340 "zcopy": false, 00:19:05.340 "get_zone_info": false, 00:19:05.340 "zone_management": false, 00:19:05.340 "zone_append": false, 00:19:05.340 "compare": false, 00:19:05.340 "compare_and_write": false, 00:19:05.340 "abort": false, 00:19:05.340 "seek_hole": true, 00:19:05.340 "seek_data": true, 00:19:05.340 "copy": false, 00:19:05.340 "nvme_iov_md": false 00:19:05.340 }, 00:19:05.340 "driver_specific": { 00:19:05.340 "lvol": { 00:19:05.340 "lvol_store_uuid": "81eddf66-6d86-422b-ab89-ffd25c558de4", 00:19:05.340 "base_bdev": "nvme0n1", 00:19:05.340 "thin_provision": true, 00:19:05.340 "num_allocated_clusters": 0, 00:19:05.340 "snapshot": false, 00:19:05.340 "clone": false, 00:19:05.340 "esnap_clone": false 00:19:05.340 } 00:19:05.340 } 00:19:05.340 } 00:19:05.340 ]' 00:19:05.340 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:05.340 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:05.340 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:05.340 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:05.340 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:05.340 03:27:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:05.340 03:27:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:05.340 03:27:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b03cab03-2117-4fe4-b19a-b91b596bbcd3 -c nvc0n1p0 --l2p_dram_limit 20 00:19:05.602 [2024-11-21 03:27:52.923319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.602 [2024-11-21 03:27:52.923358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:05.602 [2024-11-21 03:27:52.923369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:05.602 [2024-11-21 03:27:52.923379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.602 [2024-11-21 03:27:52.923417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.602 [2024-11-21 03:27:52.923427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:05.602 [2024-11-21 03:27:52.923433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:05.602 [2024-11-21 03:27:52.923440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.602 [2024-11-21 
03:27:52.923457] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:05.602 [2024-11-21 03:27:52.923658] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:05.602 [2024-11-21 03:27:52.923672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.602 [2024-11-21 03:27:52.923680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:05.602 [2024-11-21 03:27:52.923688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:19:05.602 [2024-11-21 03:27:52.923695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.602 [2024-11-21 03:27:52.923719] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7faa9b9e-05da-4e0b-a97c-3cf8e8e965c0 00:19:05.602 [2024-11-21 03:27:52.924836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.602 [2024-11-21 03:27:52.924929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:05.602 [2024-11-21 03:27:52.924980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:05.602 [2024-11-21 03:27:52.925003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.602 [2024-11-21 03:27:52.929845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.602 [2024-11-21 03:27:52.929947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:05.602 [2024-11-21 03:27:52.930000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.743 ms 00:19:05.602 [2024-11-21 03:27:52.930023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.602 [2024-11-21 03:27:52.930178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.602 [2024-11-21 03:27:52.930203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:05.602 [2024-11-21 03:27:52.930223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:05.602 [2024-11-21 03:27:52.930268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.602 [2024-11-21 03:27:52.930316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.602 [2024-11-21 03:27:52.930337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:05.602 [2024-11-21 03:27:52.930354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:05.602 [2024-11-21 03:27:52.930393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.602 [2024-11-21 03:27:52.930460] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:05.602 [2024-11-21 03:27:52.931771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.602 [2024-11-21 03:27:52.931795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:05.602 [2024-11-21 03:27:52.931806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.318 ms 00:19:05.602 [2024-11-21 03:27:52.931814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.602 [2024-11-21 03:27:52.931837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.602 [2024-11-21 03:27:52.931847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:05.602 [2024-11-21 
03:27:52.931854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:05.602 [2024-11-21 03:27:52.931861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.602 [2024-11-21 03:27:52.931873] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:05.602 [2024-11-21 03:27:52.931993] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:05.602 [2024-11-21 03:27:52.932004] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:05.602 [2024-11-21 03:27:52.932015] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:05.602 [2024-11-21 03:27:52.932023] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:05.602 [2024-11-21 03:27:52.932032] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:05.602 [2024-11-21 03:27:52.932039] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:05.602 [2024-11-21 03:27:52.932046] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:05.602 [2024-11-21 03:27:52.932053] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:05.602 [2024-11-21 03:27:52.932063] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:05.602 [2024-11-21 03:27:52.932070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.602 [2024-11-21 03:27:52.932077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:05.602 [2024-11-21 03:27:52.932084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:19:05.602 [2024-11-21 03:27:52.932093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.602 [2024-11-21 03:27:52.932154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.602 [2024-11-21 03:27:52.932162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:05.602 [2024-11-21 03:27:52.932168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:05.602 [2024-11-21 03:27:52.932174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.602 [2024-11-21 03:27:52.932240] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:05.602 [2024-11-21 03:27:52.932254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:05.602 [2024-11-21 03:27:52.932260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:05.602 [2024-11-21 03:27:52.932267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.602 [2024-11-21 03:27:52.932274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:05.602 [2024-11-21 03:27:52.932280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:05.602 [2024-11-21 03:27:52.932290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:05.602 [2024-11-21 03:27:52.932297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:05.602 [2024-11-21 03:27:52.932302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:05.602 [2024-11-21 03:27:52.932308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.50 MiB 00:19:05.602 [2024-11-21 03:27:52.932313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:05.602 [2024-11-21 03:27:52.932329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:05.602 [2024-11-21 03:27:52.932334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:05.602 [2024-11-21 03:27:52.932341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:05.602 [2024-11-21 03:27:52.932346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:05.602 [2024-11-21 03:27:52.932353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.602 [2024-11-21 03:27:52.932357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:05.602 [2024-11-21 03:27:52.932363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:05.602 [2024-11-21 03:27:52.932368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.602 [2024-11-21 03:27:52.932374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:05.602 [2024-11-21 03:27:52.932380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:05.602 [2024-11-21 03:27:52.932387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:05.602 [2024-11-21 03:27:52.932392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:05.602 [2024-11-21 03:27:52.932398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:05.602 [2024-11-21 03:27:52.932402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:05.602 [2024-11-21 03:27:52.932409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:05.602 [2024-11-21 03:27:52.932414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:05.602 [2024-11-21 03:27:52.932420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:05.602 [2024-11-21 03:27:52.932425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:05.602 [2024-11-21 03:27:52.932431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:05.602 [2024-11-21 03:27:52.932436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:05.603 [2024-11-21 03:27:52.932442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:05.603 [2024-11-21 03:27:52.932446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:05.603 [2024-11-21 03:27:52.932452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:05.603 [2024-11-21 03:27:52.932458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:05.603 [2024-11-21 03:27:52.932463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:05.603 [2024-11-21 03:27:52.932468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:05.603 [2024-11-21 03:27:52.932475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:05.603 [2024-11-21 03:27:52.932480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:05.603 [2024-11-21 03:27:52.932486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.603 [2024-11-21 03:27:52.932490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:05.603 [2024-11-21 03:27:52.932497] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:05.603 [2024-11-21 03:27:52.932502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.603 [2024-11-21 03:27:52.932512] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:05.603 [2024-11-21 03:27:52.932518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:05.603 [2024-11-21 03:27:52.932525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:05.603 [2024-11-21 03:27:52.932530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.603 [2024-11-21 03:27:52.932536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:05.603 [2024-11-21 03:27:52.932541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:05.603 [2024-11-21 03:27:52.932548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:05.603 [2024-11-21 03:27:52.932553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:05.603 [2024-11-21 03:27:52.932559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:05.603 [2024-11-21 03:27:52.932563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:05.603 [2024-11-21 03:27:52.932572] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:05.603 [2024-11-21 03:27:52.932580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:05.603 [2024-11-21 03:27:52.932587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:05.603 [2024-11-21 03:27:52.932592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:05.603 [2024-11-21 03:27:52.932599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:05.603 [2024-11-21 03:27:52.932604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:05.603 [2024-11-21 03:27:52.932611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:05.603 [2024-11-21 03:27:52.932616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:05.603 [2024-11-21 03:27:52.932622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:05.603 [2024-11-21 03:27:52.932627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:05.603 [2024-11-21 03:27:52.932635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:05.603 [2024-11-21 03:27:52.932640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:05.603 [2024-11-21 03:27:52.932647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:05.603 [2024-11-21 
03:27:52.932652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:05.603 [2024-11-21 03:27:52.932660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:05.603 [2024-11-21 03:27:52.932665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:05.603 [2024-11-21 03:27:52.932671] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:05.603 [2024-11-21 03:27:52.932678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:05.603 [2024-11-21 03:27:52.932685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:05.603 [2024-11-21 03:27:52.932691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:05.603 [2024-11-21 03:27:52.932698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:05.603 [2024-11-21 03:27:52.932718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:05.603 [2024-11-21 03:27:52.932729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.603 [2024-11-21 03:27:52.932737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:05.603 [2024-11-21 03:27:52.932743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:19:05.603 [2024-11-21 03:27:52.932752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.603 [2024-11-21 03:27:52.932776] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
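Condensed, the bring-up traced to this point comes down to three RPCs: attach the PCIe controller as nvc0, carve a 5171 MiB split (nvc0n1p0) off nvc0n1 to serve as the NV cache, and create the FTL bdev on the thin-provisioned logical volume with a 20 MiB L2P DRAM limit, the limit behind the "l2p maximum resident size is: 19 (of 20) MiB" notice further down. The sequence, with arguments exactly as they appear in the trace (rpc.py abbreviated from its full /home/vagrant/spdk_repo path):

  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  rpc.py bdev_split_create nvc0n1 -s 5171 1        # yields nvc0n1p0, passed as -c below
  rpc.py -t 240 bdev_ftl_create -b ftl0 -d b03cab03-2117-4fe4-b19a-b91b596bbcd3 \
         -c nvc0n1p0 --l2p_dram_limit 20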
00:19:05.603 [2024-11-21 03:27:52.932783] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:09.808 [2024-11-21 03:27:56.713189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.808 [2024-11-21 03:27:56.713275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:09.808 [2024-11-21 03:27:56.713299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3780.391 ms 00:19:09.808 [2024-11-21 03:27:56.713308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.808 [2024-11-21 03:27:56.727218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.808 [2024-11-21 03:27:56.727466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:09.808 [2024-11-21 03:27:56.727497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.792 ms 00:19:09.808 [2024-11-21 03:27:56.727507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.808 [2024-11-21 03:27:56.727626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.808 [2024-11-21 03:27:56.727636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:09.808 [2024-11-21 03:27:56.727652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:09.808 [2024-11-21 03:27:56.727661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.808 [2024-11-21 03:27:56.747930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.808 [2024-11-21 03:27:56.747983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:09.808 [2024-11-21 03:27:56.747999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.214 ms 00:19:09.808 [2024-11-21 03:27:56.748013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.808 [2024-11-21 03:27:56.748056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.808 [2024-11-21 03:27:56.748069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:09.808 [2024-11-21 03:27:56.748082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:09.808 [2024-11-21 03:27:56.748090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.808 [2024-11-21 03:27:56.748673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.808 [2024-11-21 03:27:56.748699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:09.808 [2024-11-21 03:27:56.748717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:19:09.808 [2024-11-21 03:27:56.748726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.808 [2024-11-21 03:27:56.748852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.808 [2024-11-21 03:27:56.748872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:09.808 [2024-11-21 03:27:56.748887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:19:09.808 [2024-11-21 03:27:56.748925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.808 [2024-11-21 03:27:56.757481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.808 [2024-11-21 03:27:56.757527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:09.808 [2024-11-21 
03:27:56.757547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.530 ms 00:19:09.808 [2024-11-21 03:27:56.757557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.808 [2024-11-21 03:27:56.768249] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:09.808 [2024-11-21 03:27:56.776221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.808 [2024-11-21 03:27:56.776269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:09.808 [2024-11-21 03:27:56.776281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.592 ms 00:19:09.808 [2024-11-21 03:27:56.776299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.808 [2024-11-21 03:27:56.872514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.808 [2024-11-21 03:27:56.872582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:09.808 [2024-11-21 03:27:56.872601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.184 ms 00:19:09.808 [2024-11-21 03:27:56.872613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.808 [2024-11-21 03:27:56.872817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.808 [2024-11-21 03:27:56.872832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:09.808 [2024-11-21 03:27:56.872841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:19:09.808 [2024-11-21 03:27:56.872858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.808 [2024-11-21 03:27:56.878948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.808 [2024-11-21 03:27:56.879145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:09.808 [2024-11-21 03:27:56.879166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.057 ms 00:19:09.808 [2024-11-21 03:27:56.879178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.808 [2024-11-21 03:27:56.884240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.808 [2024-11-21 03:27:56.884294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:09.808 [2024-11-21 03:27:56.884307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.018 ms 00:19:09.808 [2024-11-21 03:27:56.884317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.808 [2024-11-21 03:27:56.884663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.808 [2024-11-21 03:27:56.884680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:09.808 [2024-11-21 03:27:56.884690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:19:09.808 [2024-11-21 03:27:56.884701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.808 [2024-11-21 03:27:56.930441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.808 [2024-11-21 03:27:56.930647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:09.808 [2024-11-21 03:27:56.930668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.720 ms 00:19:09.808 [2024-11-21 03:27:56.930679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.808 [2024-11-21 03:27:56.937751] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.808 [2024-11-21 03:27:56.937961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:09.808 [2024-11-21 03:27:56.937986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.935 ms 00:19:09.808 [2024-11-21 03:27:56.938014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.808 [2024-11-21 03:27:56.944068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.809 [2024-11-21 03:27:56.944134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:09.809 [2024-11-21 03:27:56.944147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.749 ms 00:19:09.809 [2024-11-21 03:27:56.944157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.809 [2024-11-21 03:27:56.950137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.809 [2024-11-21 03:27:56.950196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:09.809 [2024-11-21 03:27:56.950207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.933 ms 00:19:09.809 [2024-11-21 03:27:56.950218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.809 [2024-11-21 03:27:56.950267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.809 [2024-11-21 03:27:56.950284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:09.809 [2024-11-21 03:27:56.950294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:09.809 [2024-11-21 03:27:56.950305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.809 [2024-11-21 03:27:56.950376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.809 [2024-11-21 03:27:56.950389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:09.809 [2024-11-21 03:27:56.950399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:09.809 [2024-11-21 03:27:56.950414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.809 [2024-11-21 03:27:56.951527] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4027.747 ms, result 0 00:19:09.809 { 00:19:09.809 "name": "ftl0", 00:19:09.809 "uuid": "7faa9b9e-05da-4e0b-a97c-3cf8e8e965c0" 00:19:09.809 } 00:19:09.809 03:27:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:09.809 03:27:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:09.809 03:27:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:09.809 03:27:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:09.809 [2024-11-21 03:27:57.287598] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:09.809 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:09.809 Zero copy mechanism will not be used. 00:19:09.809 Running I/O for 4 seconds... 
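This first bdevperf pass drives ftl0 with queue-depth-1 random writes of 69632 bytes (17 blocks of 4096 B each); because that exceeds the 65536-byte threshold, zero copy is disabled, as the notice above states. A quick consistency check against the results that follow, throughput = IOPS * I/O size:

  echo 'scale=3; 829.71 * 69632 / 1048576' | bc   # 55.098, i.e. the 55.10 MiB/s reported for ftl0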
00:19:12.141 678.00 IOPS, 45.02 MiB/s [2024-11-21T03:28:00.649Z] 728.50 IOPS, 48.38 MiB/s [2024-11-21T03:28:01.592Z] 870.67 IOPS, 57.82 MiB/s [2024-11-21T03:28:01.592Z] 830.00 IOPS, 55.12 MiB/s 00:19:14.027 Latency(us) 00:19:14.027 [2024-11-21T03:28:01.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.027 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:14.027 ftl0 : 4.00 829.71 55.10 0.00 0.00 1271.35 178.81 3327.21 00:19:14.027 [2024-11-21T03:28:01.592Z] =================================================================================================================== 00:19:14.027 [2024-11-21T03:28:01.592Z] Total : 829.71 55.10 0.00 0.00 1271.35 178.81 3327.21 00:19:14.027 { 00:19:14.027 "results": [ 00:19:14.027 { 00:19:14.027 "job": "ftl0", 00:19:14.027 "core_mask": "0x1", 00:19:14.027 "workload": "randwrite", 00:19:14.027 "status": "finished", 00:19:14.027 "queue_depth": 1, 00:19:14.027 "io_size": 69632, 00:19:14.027 "runtime": 4.002622, 00:19:14.027 "iops": 829.7061276333363, 00:19:14.027 "mibps": 55.097672538151244, 00:19:14.027 "io_failed": 0, 00:19:14.027 "io_timeout": 0, 00:19:14.027 "avg_latency_us": 1271.345227804415, 00:19:14.027 "min_latency_us": 178.80615384615385, 00:19:14.027 "max_latency_us": 3327.2123076923076 00:19:14.027 } 00:19:14.027 ], 00:19:14.027 "core_count": 1 00:19:14.027 } 00:19:14.027 [2024-11-21 03:28:01.297173] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:14.027 03:28:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:14.027 [2024-11-21 03:28:01.413966] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:14.027 Running I/O for 4 seconds... 
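The second pass keeps the randwrite workload but drops to 4 KiB I/O at queue depth 128, trading latency for throughput. The results below are self-consistent: by Little's law the average latency should sit near QD / IOPS, and the throughput identity holds as before:

  echo 'scale=3; 128000 / 5891.34' | bc           # 21.726 ms, close to the 21630.63 us average reported
  echo 'scale=3; 5891.34 * 4096 / 1048576' | bc   # 23.013 MiB/s, matching the ftl0 row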
00:19:15.915 6725.00 IOPS, 26.27 MiB/s [2024-11-21T03:28:04.424Z] 6510.00 IOPS, 25.43 MiB/s [2024-11-21T03:28:05.810Z] 6209.67 IOPS, 24.26 MiB/s [2024-11-21T03:28:05.810Z] 5909.25 IOPS, 23.08 MiB/s 00:19:18.245 Latency(us) 00:19:18.245 [2024-11-21T03:28:05.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.245 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:18.245 ftl0 : 4.03 5891.34 23.01 0.00 0.00 21630.63 256.79 43959.53 00:19:18.245 [2024-11-21T03:28:05.810Z] =================================================================================================================== 00:19:18.245 [2024-11-21T03:28:05.811Z] Total : 5891.34 23.01 0.00 0.00 21630.63 0.00 43959.53 00:19:18.246 [2024-11-21 03:28:05.454891] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:18.246 { 00:19:18.246 "results": [ 00:19:18.246 { 00:19:18.246 "job": "ftl0", 00:19:18.246 "core_mask": "0x1", 00:19:18.246 "workload": "randwrite", 00:19:18.246 "status": "finished", 00:19:18.246 "queue_depth": 128, 00:19:18.246 "io_size": 4096, 00:19:18.246 "runtime": 4.033887, 00:19:18.246 "iops": 5891.340040015994, 00:19:18.246 "mibps": 23.013047031312478, 00:19:18.246 "io_failed": 0, 00:19:18.246 "io_timeout": 0, 00:19:18.246 "avg_latency_us": 21630.631357685023, 00:19:18.246 "min_latency_us": 256.7876923076923, 00:19:18.246 "max_latency_us": 43959.53230769231 00:19:18.246 } 00:19:18.246 ], 00:19:18.246 "core_count": 1 00:19:18.246 } 00:19:18.246 03:28:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:18.246 [2024-11-21 03:28:05.573378] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:18.246 Running I/O for 4 seconds... 
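The final pass switches to a verify workload at the same depth and I/O size: bdevperf writes a pattern and reads it back for comparison across the LBA span reported below (start 0x0, length 0x1400000, i.e. 20,971,520 LBAs, the same figure the layout dump gave as L2P entries). The extra read-and-compare step is the likely reason IOPS land below the pure randwrite pass; the throughput identity still holds:

  echo 'scale=3; 4964.05 * 4096 / 1048576' | bc   # 19.390 MiB/s, matching the ftl0 row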
00:19:20.133 4800.00 IOPS, 18.75 MiB/s [2024-11-21T03:28:08.641Z] 4917.00 IOPS, 19.21 MiB/s [2024-11-21T03:28:09.588Z] 4986.67 IOPS, 19.48 MiB/s [2024-11-21T03:28:09.849Z] 4951.00 IOPS, 19.34 MiB/s 00:19:22.284 Latency(us) 00:19:22.284 [2024-11-21T03:28:09.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.284 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:22.284 Verification LBA range: start 0x0 length 0x1400000 00:19:22.284 ftl0 : 4.02 4964.05 19.39 0.00 0.00 25707.78 307.20 39926.55 00:19:22.284 [2024-11-21T03:28:09.849Z] =================================================================================================================== 00:19:22.284 [2024-11-21T03:28:09.849Z] Total : 4964.05 19.39 0.00 0.00 25707.78 0.00 39926.55 00:19:22.284 [2024-11-21 03:28:09.597235] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:22.284 { 00:19:22.284 "results": [ 00:19:22.284 { 00:19:22.284 "job": "ftl0", 00:19:22.284 "core_mask": "0x1", 00:19:22.284 "workload": "verify", 00:19:22.284 "status": "finished", 00:19:22.284 "verify_range": { 00:19:22.284 "start": 0, 00:19:22.284 "length": 20971520 00:19:22.284 }, 00:19:22.284 "queue_depth": 128, 00:19:22.284 "io_size": 4096, 00:19:22.284 "runtime": 4.015273, 00:19:22.285 "iops": 4964.046031241213, 00:19:22.285 "mibps": 19.39080480953599, 00:19:22.285 "io_failed": 0, 00:19:22.285 "io_timeout": 0, 00:19:22.285 "avg_latency_us": 25707.778612204573, 00:19:22.285 "min_latency_us": 307.2, 00:19:22.285 "max_latency_us": 39926.54769230769 00:19:22.285 } 00:19:22.285 ], 00:19:22.285 "core_count": 1 00:19:22.285 } 00:19:22.285 03:28:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:22.285 [2024-11-21 03:28:09.817589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.285 [2024-11-21 03:28:09.817658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:22.285 [2024-11-21 03:28:09.817676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:22.285 [2024-11-21 03:28:09.817687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.285 [2024-11-21 03:28:09.817712] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:22.285 [2024-11-21 03:28:09.818456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.285 [2024-11-21 03:28:09.818490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:22.285 [2024-11-21 03:28:09.818505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:19:22.285 [2024-11-21 03:28:09.818516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.285 [2024-11-21 03:28:09.821550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.285 [2024-11-21 03:28:09.821598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:22.285 [2024-11-21 03:28:09.821616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.999 ms 00:19:22.285 [2024-11-21 03:28:09.821624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.548 [2024-11-21 03:28:10.041024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.548 [2024-11-21 03:28:10.041275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:19:22.548 [2024-11-21 03:28:10.041311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 219.372 ms 00:19:22.548 [2024-11-21 03:28:10.041321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.548 [2024-11-21 03:28:10.047585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.548 [2024-11-21 03:28:10.047631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:22.548 [2024-11-21 03:28:10.047656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.215 ms 00:19:22.548 [2024-11-21 03:28:10.047667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.548 [2024-11-21 03:28:10.050912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.548 [2024-11-21 03:28:10.050957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:22.548 [2024-11-21 03:28:10.050975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.136 ms 00:19:22.548 [2024-11-21 03:28:10.050984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.548 [2024-11-21 03:28:10.058043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.548 [2024-11-21 03:28:10.058102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:22.548 [2024-11-21 03:28:10.058120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.007 ms 00:19:22.548 [2024-11-21 03:28:10.058129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.548 [2024-11-21 03:28:10.058270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.548 [2024-11-21 03:28:10.058282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:22.548 [2024-11-21 03:28:10.058294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:22.548 [2024-11-21 03:28:10.058303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.548 [2024-11-21 03:28:10.061714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.548 [2024-11-21 03:28:10.061761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:22.548 [2024-11-21 03:28:10.061774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.381 ms 00:19:22.548 [2024-11-21 03:28:10.061782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.548 [2024-11-21 03:28:10.064217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.548 [2024-11-21 03:28:10.064387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:22.548 [2024-11-21 03:28:10.064411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.385 ms 00:19:22.548 [2024-11-21 03:28:10.064418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.548 [2024-11-21 03:28:10.067019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.548 [2024-11-21 03:28:10.067067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:22.548 [2024-11-21 03:28:10.067082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.487 ms 00:19:22.548 [2024-11-21 03:28:10.067090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.548 [2024-11-21 03:28:10.069331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.548 [2024-11-21 03:28:10.069377] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:22.548 [2024-11-21 03:28:10.069393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.168 ms 00:19:22.548 [2024-11-21 03:28:10.069401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.548 [2024-11-21 03:28:10.069445] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:22.548 [2024-11-21 03:28:10.069462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:22.548 [2024-11-21 03:28:10.069669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.069986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.070019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.070027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:22.548 [2024-11-21 03:28:10.070039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070445] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:22.549 [2024-11-21 03:28:10.070491] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:22.549 [2024-11-21 03:28:10.070504] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7faa9b9e-05da-4e0b-a97c-3cf8e8e965c0 00:19:22.549 [2024-11-21 03:28:10.070512] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:22.549 [2024-11-21 03:28:10.070524] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:22.549 [2024-11-21 03:28:10.070532] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:22.549 [2024-11-21 03:28:10.070549] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:22.549 [2024-11-21 03:28:10.070555] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:22.549 [2024-11-21 03:28:10.070566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:22.549 [2024-11-21 03:28:10.070573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:22.549 [2024-11-21 03:28:10.070583] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:22.549 [2024-11-21 03:28:10.070590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:22.549 [2024-11-21 03:28:10.070599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.549 [2024-11-21 03:28:10.070610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:22.549 [2024-11-21 03:28:10.070623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.157 ms 00:19:22.549 [2024-11-21 03:28:10.070634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.549 [2024-11-21 03:28:10.072922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.549 [2024-11-21 03:28:10.072951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:22.549 [2024-11-21 03:28:10.072965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.262 ms 00:19:22.549 [2024-11-21 03:28:10.072973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.549 [2024-11-21 03:28:10.073099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.549 [2024-11-21 03:28:10.073110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:22.549 [2024-11-21 03:28:10.073127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:19:22.549 [2024-11-21 03:28:10.073135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.549 [2024-11-21 03:28:10.081663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.549 [2024-11-21 03:28:10.081831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:22.549 [2024-11-21 03:28:10.081893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.549 [2024-11-21 03:28:10.081932] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:22.549 [2024-11-21 03:28:10.082041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.549 [2024-11-21 03:28:10.082116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:22.549 [2024-11-21 03:28:10.082149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.549 [2024-11-21 03:28:10.082170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.549 [2024-11-21 03:28:10.082300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.549 [2024-11-21 03:28:10.082348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:22.549 [2024-11-21 03:28:10.082373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.549 [2024-11-21 03:28:10.082391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.549 [2024-11-21 03:28:10.082425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.549 [2024-11-21 03:28:10.082447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:22.549 [2024-11-21 03:28:10.082472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.549 [2024-11-21 03:28:10.082494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.549 [2024-11-21 03:28:10.096025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.549 [2024-11-21 03:28:10.096206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:22.549 [2024-11-21 03:28:10.096265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.549 [2024-11-21 03:28:10.096288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.549 [2024-11-21 03:28:10.107436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.549 [2024-11-21 03:28:10.107612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:22.549 [2024-11-21 03:28:10.107677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.549 [2024-11-21 03:28:10.107700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.549 [2024-11-21 03:28:10.107789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.549 [2024-11-21 03:28:10.107815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.549 [2024-11-21 03:28:10.107838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.549 [2024-11-21 03:28:10.107858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.549 [2024-11-21 03:28:10.107957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.549 [2024-11-21 03:28:10.108052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.549 [2024-11-21 03:28:10.108084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.549 [2024-11-21 03:28:10.108104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.549 [2024-11-21 03:28:10.108212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.550 [2024-11-21 03:28:10.108239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.550 [2024-11-21 03:28:10.108356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:22.550 [2024-11-21 03:28:10.108376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.550 [2024-11-21 03:28:10.108430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.550 [2024-11-21 03:28:10.108501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:22.550 [2024-11-21 03:28:10.108529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.550 [2024-11-21 03:28:10.108551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.550 [2024-11-21 03:28:10.108615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.550 [2024-11-21 03:28:10.108639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.550 [2024-11-21 03:28:10.108661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.550 [2024-11-21 03:28:10.108734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.550 [2024-11-21 03:28:10.108809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.550 [2024-11-21 03:28:10.108835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.550 [2024-11-21 03:28:10.108862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.550 [2024-11-21 03:28:10.108882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.811 [2024-11-21 03:28:10.109139] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 291.482 ms, result 0 00:19:22.811 true 00:19:22.811 03:28:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 88690 00:19:22.811 03:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 88690 ']' 00:19:22.811 03:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 88690 00:19:22.811 03:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:22.811 03:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.811 03:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88690 00:19:22.811 killing process with pid 88690
Received shutdown signal, test time was about 4.000000 seconds
00:19:22.811
00:19:22.811                                                           Latency(us)
00:19:22.811 [2024-11-21T03:28:10.376Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:19:22.811 [2024-11-21T03:28:10.376Z] ===================================================================================================================
00:19:22.811 [2024-11-21T03:28:10.376Z] Total              :       0.00  0.00   0.00    0.00  0.00     0.00 0.00 0.00
00:19:22.811 03:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.811 03:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.811 03:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88690' 00:19:22.811 03:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 88690 00:19:22.811 03:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 88690 00:19:28.176 Remove shared memory files 00:19:28.176 03:28:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:28.176 03:28:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:28.176 03:28:14 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:28.176 03:28:14
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:28.176 03:28:14 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:28.176 03:28:14 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:28.176 03:28:14 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:28.176 03:28:14 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:28.176 ************************************ 00:19:28.176 END TEST ftl_bdevperf 00:19:28.176 ************************************ 00:19:28.176 00:19:28.176 real 0m25.945s 00:19:28.176 user 0m28.458s 00:19:28.176 sys 0m1.035s 00:19:28.176 03:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.176 03:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:28.176 03:28:14 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:28.176 03:28:14 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:28.176 03:28:14 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.176 03:28:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:28.176 ************************************ 00:19:28.176 START TEST ftl_trim 00:19:28.176 ************************************ 00:19:28.176 03:28:14 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:28.176 * Looking for test storage... 00:19:28.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:28.176 03:28:15 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:28.176 03:28:15 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:19:28.176 03:28:15 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:28.176 03:28:15 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:28.176 03:28:15 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:28.176 03:28:15 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:28.176 03:28:15 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:28.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.176 --rc genhtml_branch_coverage=1 00:19:28.176 --rc genhtml_function_coverage=1 00:19:28.176 --rc genhtml_legend=1 00:19:28.176 --rc geninfo_all_blocks=1 00:19:28.176 --rc geninfo_unexecuted_blocks=1 00:19:28.176 00:19:28.176 ' 00:19:28.176 03:28:15 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:28.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.176 --rc genhtml_branch_coverage=1 00:19:28.176 --rc genhtml_function_coverage=1 00:19:28.176 --rc genhtml_legend=1 00:19:28.176 --rc geninfo_all_blocks=1 00:19:28.176 --rc geninfo_unexecuted_blocks=1 00:19:28.176 00:19:28.176 ' 00:19:28.176 03:28:15 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:28.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.176 --rc genhtml_branch_coverage=1 00:19:28.176 --rc genhtml_function_coverage=1 00:19:28.176 --rc genhtml_legend=1 00:19:28.176 --rc geninfo_all_blocks=1 00:19:28.176 --rc geninfo_unexecuted_blocks=1 00:19:28.176 00:19:28.176 ' 00:19:28.176 03:28:15 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:28.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.176 --rc genhtml_branch_coverage=1 00:19:28.176 --rc genhtml_function_coverage=1 00:19:28.176 --rc genhtml_legend=1 00:19:28.176 --rc geninfo_all_blocks=1 00:19:28.176 --rc geninfo_unexecuted_blocks=1 00:19:28.176 00:19:28.176 ' 00:19:28.176 03:28:15 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:28.176 03:28:15 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:28.176 03:28:15 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:28.176 03:28:15 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:28.176 03:28:15 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:28.176 03:28:15 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:28.176 03:28:15 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:28.176 03:28:15 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:28.176 03:28:15 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:28.176 03:28:15 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:28.176 03:28:15 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:28.176 03:28:15 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:28.176 03:28:15 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:28.176 03:28:15 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:28.176 03:28:15 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:28.176 03:28:15 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:28.177 03:28:15 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=89042 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 89042 00:19:28.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.177 03:28:15 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:28.177 03:28:15 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89042 ']' 00:19:28.177 03:28:15 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.177 03:28:15 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.177 03:28:15 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.177 03:28:15 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.177 03:28:15 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:28.177 [2024-11-21 03:28:15.235753] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:19:28.177 [2024-11-21 03:28:15.236140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89042 ] 00:19:28.177 [2024-11-21 03:28:15.377509] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:28.177 [2024-11-21 03:28:15.400688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:28.177 [2024-11-21 03:28:15.432265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.177 [2024-11-21 03:28:15.432582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.177 [2024-11-21 03:28:15.432627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.749 03:28:16 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.749 03:28:16 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:28.749 03:28:16 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:28.749 03:28:16 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:28.749 03:28:16 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:28.749 03:28:16 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:28.749 03:28:16 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:28.749 03:28:16 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:29.010 03:28:16 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:29.010 03:28:16 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:29.010 03:28:16 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:29.010 03:28:16 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:29.010 03:28:16 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:29.010 03:28:16 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:29.010 03:28:16 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:29.010 03:28:16 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:29.272 03:28:16 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:29.272 { 00:19:29.272 "name": "nvme0n1", 00:19:29.272 "aliases": [ 00:19:29.272 "cd951d63-09ba-4845-b7c1-bae53598df62" 00:19:29.272 ], 00:19:29.272 "product_name": "NVMe disk", 00:19:29.272 "block_size": 4096, 00:19:29.272 "num_blocks": 1310720, 00:19:29.272 "uuid": "cd951d63-09ba-4845-b7c1-bae53598df62", 00:19:29.272 "numa_id": -1, 00:19:29.272 "assigned_rate_limits": { 00:19:29.272 "rw_ios_per_sec": 0, 00:19:29.272 "rw_mbytes_per_sec": 0, 00:19:29.272 "r_mbytes_per_sec": 0, 00:19:29.272 "w_mbytes_per_sec": 0 00:19:29.272 }, 00:19:29.272 "claimed": true, 00:19:29.272 "claim_type": "read_many_write_one", 00:19:29.272 "zoned": false, 00:19:29.272 "supported_io_types": { 00:19:29.272 "read": true, 00:19:29.272 "write": true, 00:19:29.272 "unmap": true, 00:19:29.272 "flush": true, 00:19:29.272 "reset": true, 00:19:29.272 "nvme_admin": true, 00:19:29.272 "nvme_io": true, 00:19:29.272 "nvme_io_md": false, 00:19:29.272 "write_zeroes": true, 00:19:29.272 "zcopy": false, 00:19:29.272 "get_zone_info": false, 00:19:29.272 "zone_management": false, 00:19:29.272 "zone_append": false, 00:19:29.272 "compare": true, 00:19:29.272 "compare_and_write": false, 00:19:29.272 "abort": true, 00:19:29.272 "seek_hole": false, 00:19:29.272 "seek_data": false, 00:19:29.272 "copy": true, 00:19:29.272 "nvme_iov_md": false 00:19:29.272 }, 00:19:29.272 "driver_specific": { 00:19:29.272 "nvme": [ 00:19:29.272 { 00:19:29.272 "pci_address": "0000:00:11.0", 00:19:29.272 "trid": { 00:19:29.272 "trtype": "PCIe", 00:19:29.272 "traddr": "0000:00:11.0" 00:19:29.272 }, 00:19:29.272 "ctrlr_data": { 00:19:29.272 "cntlid": 0, 00:19:29.272 "vendor_id": "0x1b36", 00:19:29.272 "model_number": "QEMU NVMe Ctrl", 00:19:29.272 "serial_number": "12341", 00:19:29.272 "firmware_revision": "8.0.0", 00:19:29.272 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:29.272 "oacs": { 00:19:29.272 "security": 0, 00:19:29.272 "format": 1, 00:19:29.272 "firmware": 0, 00:19:29.272 "ns_manage": 1 00:19:29.272 }, 00:19:29.272 "multi_ctrlr": false, 00:19:29.272 "ana_reporting": false 00:19:29.272 }, 00:19:29.272 "vs": { 00:19:29.272 "nvme_version": "1.4" 00:19:29.272 }, 00:19:29.272 "ns_data": { 00:19:29.272 "id": 1, 00:19:29.272 "can_share": false 00:19:29.272 } 00:19:29.272 } 00:19:29.272 ], 00:19:29.272 "mp_policy": "active_passive" 00:19:29.272 } 00:19:29.272 } 00:19:29.272 ]' 00:19:29.272 03:28:16 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:29.272 03:28:16 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:29.272 03:28:16 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:29.272 03:28:16 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:29.272 03:28:16 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:29.272 03:28:16 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:29.272 03:28:16 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:29.272 03:28:16 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:29.272 03:28:16 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:29.272 03:28:16 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:29.272 03:28:16 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:29.534 03:28:16 ftl.ftl_trim -- ftl/common.sh@28 -- # 
stores=81eddf66-6d86-422b-ab89-ffd25c558de4 00:19:29.534 03:28:16 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:29.534 03:28:16 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 81eddf66-6d86-422b-ab89-ffd25c558de4 00:19:29.795 03:28:17 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:30.056 03:28:17 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=b87a609d-f524-4abb-8cb3-b7839adcc2a4 00:19:30.056 03:28:17 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b87a609d-f524-4abb-8cb3-b7839adcc2a4 00:19:30.056 03:28:17 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=1c4df47e-d247-4b40-9b0f-e18d097c3f0c 00:19:30.056 03:28:17 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1c4df47e-d247-4b40-9b0f-e18d097c3f0c 00:19:30.056 03:28:17 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:30.056 03:28:17 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:30.056 03:28:17 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=1c4df47e-d247-4b40-9b0f-e18d097c3f0c 00:19:30.056 03:28:17 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:30.057 03:28:17 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 1c4df47e-d247-4b40-9b0f-e18d097c3f0c 00:19:30.057 03:28:17 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=1c4df47e-d247-4b40-9b0f-e18d097c3f0c 00:19:30.057 03:28:17 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:30.057 03:28:17 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:30.057 03:28:17 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:30.057 03:28:17 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c4df47e-d247-4b40-9b0f-e18d097c3f0c 00:19:30.316 03:28:17 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:30.316 { 00:19:30.316 "name": "1c4df47e-d247-4b40-9b0f-e18d097c3f0c", 00:19:30.316 "aliases": [ 00:19:30.316 "lvs/nvme0n1p0" 00:19:30.316 ], 00:19:30.316 "product_name": "Logical Volume", 00:19:30.316 "block_size": 4096, 00:19:30.316 "num_blocks": 26476544, 00:19:30.316 "uuid": "1c4df47e-d247-4b40-9b0f-e18d097c3f0c", 00:19:30.316 "assigned_rate_limits": { 00:19:30.316 "rw_ios_per_sec": 0, 00:19:30.316 "rw_mbytes_per_sec": 0, 00:19:30.316 "r_mbytes_per_sec": 0, 00:19:30.316 "w_mbytes_per_sec": 0 00:19:30.316 }, 00:19:30.316 "claimed": false, 00:19:30.316 "zoned": false, 00:19:30.316 "supported_io_types": { 00:19:30.316 "read": true, 00:19:30.316 "write": true, 00:19:30.316 "unmap": true, 00:19:30.316 "flush": false, 00:19:30.316 "reset": true, 00:19:30.316 "nvme_admin": false, 00:19:30.316 "nvme_io": false, 00:19:30.316 "nvme_io_md": false, 00:19:30.316 "write_zeroes": true, 00:19:30.316 "zcopy": false, 00:19:30.316 "get_zone_info": false, 00:19:30.316 "zone_management": false, 00:19:30.316 "zone_append": false, 00:19:30.316 "compare": false, 00:19:30.316 "compare_and_write": false, 00:19:30.316 "abort": false, 00:19:30.316 "seek_hole": true, 00:19:30.316 "seek_data": true, 00:19:30.316 "copy": false, 00:19:30.316 "nvme_iov_md": false 00:19:30.316 }, 00:19:30.316 "driver_specific": { 00:19:30.316 "lvol": { 00:19:30.316 "lvol_store_uuid": "b87a609d-f524-4abb-8cb3-b7839adcc2a4", 00:19:30.316 "base_bdev": "nvme0n1", 00:19:30.316 "thin_provision": true, 
00:19:30.316 "num_allocated_clusters": 0, 00:19:30.316 "snapshot": false, 00:19:30.316 "clone": false, 00:19:30.316 "esnap_clone": false 00:19:30.316 } 00:19:30.316 } 00:19:30.316 } 00:19:30.316 ]' 00:19:30.316 03:28:17 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:30.316 03:28:17 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:30.316 03:28:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:30.316 03:28:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:30.316 03:28:17 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:30.316 03:28:17 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:30.316 03:28:17 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:30.316 03:28:17 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:30.316 03:28:17 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:30.575 03:28:18 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:30.575 03:28:18 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:30.575 03:28:18 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 1c4df47e-d247-4b40-9b0f-e18d097c3f0c 00:19:30.575 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=1c4df47e-d247-4b40-9b0f-e18d097c3f0c 00:19:30.575 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:30.575 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:30.575 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:30.575 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c4df47e-d247-4b40-9b0f-e18d097c3f0c 00:19:30.833 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:30.833 { 00:19:30.833 "name": "1c4df47e-d247-4b40-9b0f-e18d097c3f0c", 00:19:30.833 "aliases": [ 00:19:30.833 "lvs/nvme0n1p0" 00:19:30.833 ], 00:19:30.833 "product_name": "Logical Volume", 00:19:30.833 "block_size": 4096, 00:19:30.833 "num_blocks": 26476544, 00:19:30.833 "uuid": "1c4df47e-d247-4b40-9b0f-e18d097c3f0c", 00:19:30.833 "assigned_rate_limits": { 00:19:30.833 "rw_ios_per_sec": 0, 00:19:30.833 "rw_mbytes_per_sec": 0, 00:19:30.833 "r_mbytes_per_sec": 0, 00:19:30.833 "w_mbytes_per_sec": 0 00:19:30.833 }, 00:19:30.833 "claimed": false, 00:19:30.833 "zoned": false, 00:19:30.833 "supported_io_types": { 00:19:30.833 "read": true, 00:19:30.833 "write": true, 00:19:30.833 "unmap": true, 00:19:30.833 "flush": false, 00:19:30.833 "reset": true, 00:19:30.833 "nvme_admin": false, 00:19:30.833 "nvme_io": false, 00:19:30.833 "nvme_io_md": false, 00:19:30.833 "write_zeroes": true, 00:19:30.833 "zcopy": false, 00:19:30.833 "get_zone_info": false, 00:19:30.833 "zone_management": false, 00:19:30.833 "zone_append": false, 00:19:30.833 "compare": false, 00:19:30.833 "compare_and_write": false, 00:19:30.833 "abort": false, 00:19:30.833 "seek_hole": true, 00:19:30.833 "seek_data": true, 00:19:30.833 "copy": false, 00:19:30.833 "nvme_iov_md": false 00:19:30.833 }, 00:19:30.833 "driver_specific": { 00:19:30.833 "lvol": { 00:19:30.833 "lvol_store_uuid": "b87a609d-f524-4abb-8cb3-b7839adcc2a4", 00:19:30.833 "base_bdev": "nvme0n1", 00:19:30.833 "thin_provision": true, 00:19:30.833 "num_allocated_clusters": 0, 00:19:30.833 "snapshot": false, 00:19:30.833 "clone": false, 00:19:30.833 
"esnap_clone": false 00:19:30.833 } 00:19:30.833 } 00:19:30.833 } 00:19:30.833 ]' 00:19:30.833 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:30.833 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:30.833 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:30.833 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:30.833 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:30.833 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:30.833 03:28:18 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:30.833 03:28:18 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:31.092 03:28:18 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:31.092 03:28:18 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:31.092 03:28:18 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 1c4df47e-d247-4b40-9b0f-e18d097c3f0c 00:19:31.092 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=1c4df47e-d247-4b40-9b0f-e18d097c3f0c 00:19:31.092 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:31.092 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:31.092 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:31.092 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c4df47e-d247-4b40-9b0f-e18d097c3f0c 00:19:31.350 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:31.350 { 00:19:31.350 "name": "1c4df47e-d247-4b40-9b0f-e18d097c3f0c", 00:19:31.350 "aliases": [ 00:19:31.350 "lvs/nvme0n1p0" 00:19:31.350 ], 00:19:31.350 "product_name": "Logical Volume", 00:19:31.350 "block_size": 4096, 00:19:31.350 "num_blocks": 26476544, 00:19:31.350 "uuid": "1c4df47e-d247-4b40-9b0f-e18d097c3f0c", 00:19:31.350 "assigned_rate_limits": { 00:19:31.350 "rw_ios_per_sec": 0, 00:19:31.350 "rw_mbytes_per_sec": 0, 00:19:31.350 "r_mbytes_per_sec": 0, 00:19:31.350 "w_mbytes_per_sec": 0 00:19:31.350 }, 00:19:31.350 "claimed": false, 00:19:31.351 "zoned": false, 00:19:31.351 "supported_io_types": { 00:19:31.351 "read": true, 00:19:31.351 "write": true, 00:19:31.351 "unmap": true, 00:19:31.351 "flush": false, 00:19:31.351 "reset": true, 00:19:31.351 "nvme_admin": false, 00:19:31.351 "nvme_io": false, 00:19:31.351 "nvme_io_md": false, 00:19:31.351 "write_zeroes": true, 00:19:31.351 "zcopy": false, 00:19:31.351 "get_zone_info": false, 00:19:31.351 "zone_management": false, 00:19:31.351 "zone_append": false, 00:19:31.351 "compare": false, 00:19:31.351 "compare_and_write": false, 00:19:31.351 "abort": false, 00:19:31.351 "seek_hole": true, 00:19:31.351 "seek_data": true, 00:19:31.351 "copy": false, 00:19:31.351 "nvme_iov_md": false 00:19:31.351 }, 00:19:31.351 "driver_specific": { 00:19:31.351 "lvol": { 00:19:31.351 "lvol_store_uuid": "b87a609d-f524-4abb-8cb3-b7839adcc2a4", 00:19:31.351 "base_bdev": "nvme0n1", 00:19:31.351 "thin_provision": true, 00:19:31.351 "num_allocated_clusters": 0, 00:19:31.351 "snapshot": false, 00:19:31.351 "clone": false, 00:19:31.351 "esnap_clone": false 00:19:31.351 } 00:19:31.351 } 00:19:31.351 } 00:19:31.351 ]' 00:19:31.351 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:31.351 03:28:18 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:31.351 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:31.351 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:31.351 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:31.351 03:28:18 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:31.351 03:28:18 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:31.351 03:28:18 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1c4df47e-d247-4b40-9b0f-e18d097c3f0c -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:31.610 [2024-11-21 03:28:19.037204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.610 [2024-11-21 03:28:19.037247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:31.610 [2024-11-21 03:28:19.037259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:31.610 [2024-11-21 03:28:19.037268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.610 [2024-11-21 03:28:19.039172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.610 [2024-11-21 03:28:19.039199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:31.610 [2024-11-21 03:28:19.039210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.878 ms 00:19:31.610 [2024-11-21 03:28:19.039219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.610 [2024-11-21 03:28:19.039290] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:31.610 [2024-11-21 03:28:19.039484] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:31.610 [2024-11-21 03:28:19.039517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.610 [2024-11-21 03:28:19.039524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:31.610 [2024-11-21 03:28:19.039534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:19:31.610 [2024-11-21 03:28:19.039539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.610 [2024-11-21 03:28:19.039617] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4c1f45e4-d3ad-4cfe-a5aa-597dbbda17c1 00:19:31.610 [2024-11-21 03:28:19.040589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.610 [2024-11-21 03:28:19.040700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:31.610 [2024-11-21 03:28:19.040713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:31.610 [2024-11-21 03:28:19.040722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.610 [2024-11-21 03:28:19.045930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.610 [2024-11-21 03:28:19.046024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.610 [2024-11-21 03:28:19.046074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.129 ms 00:19:31.610 [2024-11-21 03:28:19.046097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.610 [2024-11-21 03:28:19.046214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:31.610 [2024-11-21 03:28:19.046244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.610 [2024-11-21 03:28:19.046270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:31.610 [2024-11-21 03:28:19.046319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.610 [2024-11-21 03:28:19.046362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.610 [2024-11-21 03:28:19.046387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:31.610 [2024-11-21 03:28:19.046405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:31.610 [2024-11-21 03:28:19.046421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.610 [2024-11-21 03:28:19.046466] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:31.610 [2024-11-21 03:28:19.047924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.610 [2024-11-21 03:28:19.048002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.610 [2024-11-21 03:28:19.048087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.461 ms 00:19:31.610 [2024-11-21 03:28:19.048107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.610 [2024-11-21 03:28:19.048178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.610 [2024-11-21 03:28:19.048266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:31.610 [2024-11-21 03:28:19.048289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:31.610 [2024-11-21 03:28:19.048314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.610 [2024-11-21 03:28:19.048350] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:31.610 [2024-11-21 03:28:19.048506] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:31.610 [2024-11-21 03:28:19.048584] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:31.610 [2024-11-21 03:28:19.048638] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:31.610 [2024-11-21 03:28:19.048677] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:31.610 [2024-11-21 03:28:19.048769] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:31.610 [2024-11-21 03:28:19.048797] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:31.610 [2024-11-21 03:28:19.048811] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:31.610 [2024-11-21 03:28:19.048827] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:31.610 [2024-11-21 03:28:19.048936] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:31.610 [2024-11-21 03:28:19.048961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.610 [2024-11-21 03:28:19.048976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:31.610 [2024-11-21 03:28:19.049053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 
00:19:31.610 [2024-11-21 03:28:19.049070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.610 [2024-11-21 03:28:19.049163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.611 [2024-11-21 03:28:19.049180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:31.611 [2024-11-21 03:28:19.049197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:31.611 [2024-11-21 03:28:19.049234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.611 [2024-11-21 03:28:19.049348] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:31.611 [2024-11-21 03:28:19.049397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:31.611 [2024-11-21 03:28:19.049417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.611 [2024-11-21 03:28:19.049478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.611 [2024-11-21 03:28:19.049497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:31.611 [2024-11-21 03:28:19.049511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:31.611 [2024-11-21 03:28:19.049529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:31.611 [2024-11-21 03:28:19.049543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:31.611 [2024-11-21 03:28:19.049559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:31.611 [2024-11-21 03:28:19.049606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.611 [2024-11-21 03:28:19.049625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:31.611 [2024-11-21 03:28:19.049640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:31.611 [2024-11-21 03:28:19.049657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.611 [2024-11-21 03:28:19.049672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:31.611 [2024-11-21 03:28:19.049716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:31.611 [2024-11-21 03:28:19.049733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.611 [2024-11-21 03:28:19.049749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:31.611 [2024-11-21 03:28:19.049763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:31.611 [2024-11-21 03:28:19.049778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.611 [2024-11-21 03:28:19.049816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:31.611 [2024-11-21 03:28:19.049835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:31.611 [2024-11-21 03:28:19.049850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.611 [2024-11-21 03:28:19.049945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:31.611 [2024-11-21 03:28:19.049965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:31.611 [2024-11-21 03:28:19.049980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.611 [2024-11-21 03:28:19.050002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:31.611 [2024-11-21 03:28:19.050018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 
107.12 MiB 00:19:31.611 [2024-11-21 03:28:19.050099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.611 [2024-11-21 03:28:19.050120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:31.611 [2024-11-21 03:28:19.050134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:31.611 [2024-11-21 03:28:19.050150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.611 [2024-11-21 03:28:19.050164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:31.611 [2024-11-21 03:28:19.050215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:31.611 [2024-11-21 03:28:19.050233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.611 [2024-11-21 03:28:19.050249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:31.611 [2024-11-21 03:28:19.050263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:31.611 [2024-11-21 03:28:19.050278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.611 [2024-11-21 03:28:19.050319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:31.611 [2024-11-21 03:28:19.050338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:31.611 [2024-11-21 03:28:19.050353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.611 [2024-11-21 03:28:19.050369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:31.611 [2024-11-21 03:28:19.050383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:31.611 [2024-11-21 03:28:19.050398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.611 [2024-11-21 03:28:19.050438] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:31.611 [2024-11-21 03:28:19.050461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:31.611 [2024-11-21 03:28:19.050476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.611 [2024-11-21 03:28:19.050502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.611 [2024-11-21 03:28:19.050517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:31.611 [2024-11-21 03:28:19.050533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:31.611 [2024-11-21 03:28:19.050548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:31.611 [2024-11-21 03:28:19.050590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:31.611 [2024-11-21 03:28:19.050607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:31.611 [2024-11-21 03:28:19.050623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:31.611 [2024-11-21 03:28:19.050641] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:31.611 [2024-11-21 03:28:19.050666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.611 [2024-11-21 03:28:19.050690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:31.611 [2024-11-21 03:28:19.050750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:31.611 [2024-11-21 03:28:19.050788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:31.611 [2024-11-21 03:28:19.050812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:31.611 [2024-11-21 03:28:19.050833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:31.611 [2024-11-21 03:28:19.050886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:31.611 [2024-11-21 03:28:19.050928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:31.611 [2024-11-21 03:28:19.050955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:31.611 [2024-11-21 03:28:19.050976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:31.611 [2024-11-21 03:28:19.051035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:31.611 [2024-11-21 03:28:19.051070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:31.611 [2024-11-21 03:28:19.051093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:31.611 [2024-11-21 03:28:19.051115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:31.611 [2024-11-21 03:28:19.051139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:31.611 [2024-11-21 03:28:19.051235] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:31.611 [2024-11-21 03:28:19.051264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.611 [2024-11-21 03:28:19.051311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:31.611 [2024-11-21 03:28:19.051337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:31.611 [2024-11-21 03:28:19.051360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:31.611 [2024-11-21 03:28:19.051403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:31.611 [2024-11-21 03:28:19.051464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.611 [2024-11-21 03:28:19.051485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:31.611 [2024-11-21 03:28:19.051500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.168 ms 00:19:31.611 
[2024-11-21 03:28:19.051516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.611 [2024-11-21 03:28:19.051621] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:19:31.611 [2024-11-21 03:28:19.051652] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:34.140 [2024-11-21 03:28:21.504648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.140 [2024-11-21 03:28:21.504810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:34.140 [2024-11-21 03:28:21.504913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2453.017 ms 00:19:34.140 [2024-11-21 03:28:21.504944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.140 [2024-11-21 03:28:21.513386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.140 [2024-11-21 03:28:21.513524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:34.140 [2024-11-21 03:28:21.513587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.276 ms 00:19:34.140 [2024-11-21 03:28:21.513616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.140 [2024-11-21 03:28:21.513786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.140 [2024-11-21 03:28:21.513861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:34.140 [2024-11-21 03:28:21.513946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:34.140 [2024-11-21 03:28:21.513973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.140 [2024-11-21 03:28:21.532736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.140 [2024-11-21 03:28:21.532961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:34.140 [2024-11-21 03:28:21.532988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.705 ms 00:19:34.140 [2024-11-21 03:28:21.533003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.140 [2024-11-21 03:28:21.533116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.140 [2024-11-21 03:28:21.533140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:34.140 [2024-11-21 03:28:21.533154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:34.140 [2024-11-21 03:28:21.533167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.140 [2024-11-21 03:28:21.533564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.140 [2024-11-21 03:28:21.533596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:34.140 [2024-11-21 03:28:21.533612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:19:34.140 [2024-11-21 03:28:21.533643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.140 [2024-11-21 03:28:21.533847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.140 [2024-11-21 03:28:21.533872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:34.140 [2024-11-21 03:28:21.533930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:19:34.140 [2024-11-21 03:28:21.533944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:34.140 [2024-11-21 03:28:21.540658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.140 [2024-11-21 03:28:21.540702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:34.140 [2024-11-21 03:28:21.540717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.671 ms 00:19:34.140 [2024-11-21 03:28:21.540732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.140 [2024-11-21 03:28:21.549265] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:34.140 [2024-11-21 03:28:21.563577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.140 [2024-11-21 03:28:21.563607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:34.140 [2024-11-21 03:28:21.563620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.643 ms 00:19:34.140 [2024-11-21 03:28:21.563627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.140 [2024-11-21 03:28:21.625765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.141 [2024-11-21 03:28:21.625806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:34.141 [2024-11-21 03:28:21.625822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.070 ms 00:19:34.141 [2024-11-21 03:28:21.625833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.141 [2024-11-21 03:28:21.626045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.141 [2024-11-21 03:28:21.626057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:34.141 [2024-11-21 03:28:21.626086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:19:34.141 [2024-11-21 03:28:21.626095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.141 [2024-11-21 03:28:21.629263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.141 [2024-11-21 03:28:21.629389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:34.141 [2024-11-21 03:28:21.629408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.131 ms 00:19:34.141 [2024-11-21 03:28:21.629416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.141 [2024-11-21 03:28:21.632090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.141 [2024-11-21 03:28:21.632120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:34.141 [2024-11-21 03:28:21.632131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.607 ms 00:19:34.141 [2024-11-21 03:28:21.632138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.141 [2024-11-21 03:28:21.632443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.141 [2024-11-21 03:28:21.632454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:34.141 [2024-11-21 03:28:21.632476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:19:34.141 [2024-11-21 03:28:21.632484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.141 [2024-11-21 03:28:21.664700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.141 [2024-11-21 03:28:21.664730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L 
region 00:19:34.141 [2024-11-21 03:28:21.664745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.169 ms 00:19:34.141 [2024-11-21 03:28:21.664764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.141 [2024-11-21 03:28:21.668908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.141 [2024-11-21 03:28:21.668935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:34.141 [2024-11-21 03:28:21.668946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.059 ms 00:19:34.141 [2024-11-21 03:28:21.668954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.141 [2024-11-21 03:28:21.672082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.141 [2024-11-21 03:28:21.672109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:34.141 [2024-11-21 03:28:21.672120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.077 ms 00:19:34.141 [2024-11-21 03:28:21.672127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.141 [2024-11-21 03:28:21.675396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.141 [2024-11-21 03:28:21.675520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:34.141 [2024-11-21 03:28:21.675539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.217 ms 00:19:34.141 [2024-11-21 03:28:21.675546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.141 [2024-11-21 03:28:21.675653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.141 [2024-11-21 03:28:21.675665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:34.141 [2024-11-21 03:28:21.675675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:34.141 [2024-11-21 03:28:21.675683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.141 [2024-11-21 03:28:21.675761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.141 [2024-11-21 03:28:21.675770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:34.141 [2024-11-21 03:28:21.675780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:34.141 [2024-11-21 03:28:21.675787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.141 [2024-11-21 03:28:21.676627] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:34.141 [2024-11-21 03:28:21.677573] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2639.136 ms, result 0 00:19:34.141 [2024-11-21 03:28:21.678164] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:34.141 { 00:19:34.141 "name": "ftl0", 00:19:34.141 "uuid": "4c1f45e4-d3ad-4cfe-a5aa-597dbbda17c1" 00:19:34.141 } 00:19:34.141 03:28:21 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:34.141 03:28:21 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:34.141 03:28:21 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:34.141 03:28:21 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:34.141 03:28:21 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:34.141 03:28:21 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:34.141 03:28:21 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:34.399 03:28:21 ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:34.657 [ 00:19:34.657 { 00:19:34.657 "name": "ftl0", 00:19:34.657 "aliases": [ 00:19:34.657 "4c1f45e4-d3ad-4cfe-a5aa-597dbbda17c1" 00:19:34.657 ], 00:19:34.657 "product_name": "FTL disk", 00:19:34.657 "block_size": 4096, 00:19:34.657 "num_blocks": 23592960, 00:19:34.657 "uuid": "4c1f45e4-d3ad-4cfe-a5aa-597dbbda17c1", 00:19:34.657 "assigned_rate_limits": { 00:19:34.657 "rw_ios_per_sec": 0, 00:19:34.657 "rw_mbytes_per_sec": 0, 00:19:34.657 "r_mbytes_per_sec": 0, 00:19:34.657 "w_mbytes_per_sec": 0 00:19:34.657 }, 00:19:34.657 "claimed": false, 00:19:34.657 "zoned": false, 00:19:34.657 "supported_io_types": { 00:19:34.657 "read": true, 00:19:34.657 "write": true, 00:19:34.657 "unmap": true, 00:19:34.657 "flush": true, 00:19:34.657 "reset": false, 00:19:34.657 "nvme_admin": false, 00:19:34.657 "nvme_io": false, 00:19:34.657 "nvme_io_md": false, 00:19:34.657 "write_zeroes": true, 00:19:34.657 "zcopy": false, 00:19:34.657 "get_zone_info": false, 00:19:34.657 "zone_management": false, 00:19:34.657 "zone_append": false, 00:19:34.657 "compare": false, 00:19:34.657 "compare_and_write": false, 00:19:34.657 "abort": false, 00:19:34.657 "seek_hole": false, 00:19:34.657 "seek_data": false, 00:19:34.657 "copy": false, 00:19:34.657 "nvme_iov_md": false 00:19:34.657 }, 00:19:34.657 "driver_specific": { 00:19:34.657 "ftl": { 00:19:34.657 "base_bdev": "1c4df47e-d247-4b40-9b0f-e18d097c3f0c", 00:19:34.657 "cache": "nvc0n1p0" 00:19:34.657 } 00:19:34.657 } 00:19:34.657 } 00:19:34.657 ] 00:19:34.657 03:28:22 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:34.657 03:28:22 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:34.657 03:28:22 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:34.916 03:28:22 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:34.916 03:28:22 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:35.176 03:28:22 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:35.176 { 00:19:35.176 "name": "ftl0", 00:19:35.176 "aliases": [ 00:19:35.176 "4c1f45e4-d3ad-4cfe-a5aa-597dbbda17c1" 00:19:35.176 ], 00:19:35.176 "product_name": "FTL disk", 00:19:35.176 "block_size": 4096, 00:19:35.176 "num_blocks": 23592960, 00:19:35.176 "uuid": "4c1f45e4-d3ad-4cfe-a5aa-597dbbda17c1", 00:19:35.176 "assigned_rate_limits": { 00:19:35.176 "rw_ios_per_sec": 0, 00:19:35.176 "rw_mbytes_per_sec": 0, 00:19:35.176 "r_mbytes_per_sec": 0, 00:19:35.176 "w_mbytes_per_sec": 0 00:19:35.176 }, 00:19:35.176 "claimed": false, 00:19:35.176 "zoned": false, 00:19:35.176 "supported_io_types": { 00:19:35.176 "read": true, 00:19:35.176 "write": true, 00:19:35.176 "unmap": true, 00:19:35.176 "flush": true, 00:19:35.176 "reset": false, 00:19:35.176 "nvme_admin": false, 00:19:35.176 "nvme_io": false, 00:19:35.176 "nvme_io_md": false, 00:19:35.176 "write_zeroes": true, 00:19:35.176 "zcopy": false, 00:19:35.176 "get_zone_info": false, 00:19:35.176 "zone_management": false, 00:19:35.176 "zone_append": false, 00:19:35.176 "compare": false, 00:19:35.176 "compare_and_write": false, 00:19:35.176 "abort": false, 00:19:35.176 "seek_hole": false, 
00:19:35.176 "seek_data": false, 00:19:35.176 "copy": false, 00:19:35.176 "nvme_iov_md": false 00:19:35.176 }, 00:19:35.176 "driver_specific": { 00:19:35.176 "ftl": { 00:19:35.176 "base_bdev": "1c4df47e-d247-4b40-9b0f-e18d097c3f0c", 00:19:35.176 "cache": "nvc0n1p0" 00:19:35.176 } 00:19:35.176 } 00:19:35.176 } 00:19:35.176 ]' 00:19:35.176 03:28:22 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:35.176 03:28:22 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:35.176 03:28:22 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:35.176 [2024-11-21 03:28:22.679093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.176 [2024-11-21 03:28:22.679220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:35.176 [2024-11-21 03:28:22.679280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:35.176 [2024-11-21 03:28:22.679309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.176 [2024-11-21 03:28:22.679362] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:35.176 [2024-11-21 03:28:22.679963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.176 [2024-11-21 03:28:22.680051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:35.176 [2024-11-21 03:28:22.680105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:19:35.176 [2024-11-21 03:28:22.680131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.176 [2024-11-21 03:28:22.680776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.176 [2024-11-21 03:28:22.680845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:35.177 [2024-11-21 03:28:22.680861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:19:35.177 [2024-11-21 03:28:22.680869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.177 [2024-11-21 03:28:22.684587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.177 [2024-11-21 03:28:22.684653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:35.177 [2024-11-21 03:28:22.684701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.669 ms 00:19:35.177 [2024-11-21 03:28:22.684724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.177 [2024-11-21 03:28:22.691757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.177 [2024-11-21 03:28:22.691850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:35.177 [2024-11-21 03:28:22.691914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.974 ms 00:19:35.177 [2024-11-21 03:28:22.691953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.177 [2024-11-21 03:28:22.693578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.177 [2024-11-21 03:28:22.693669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:35.177 [2024-11-21 03:28:22.693722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.449 ms 00:19:35.177 [2024-11-21 03:28:22.693744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.177 [2024-11-21 03:28:22.697986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:19:35.177 [2024-11-21 03:28:22.698087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:35.177 [2024-11-21 03:28:22.698183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.188 ms 00:19:35.177 [2024-11-21 03:28:22.698208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.177 [2024-11-21 03:28:22.698456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.177 [2024-11-21 03:28:22.698513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:35.177 [2024-11-21 03:28:22.698560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:19:35.177 [2024-11-21 03:28:22.698582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.177 [2024-11-21 03:28:22.700279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.177 [2024-11-21 03:28:22.700370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:35.177 [2024-11-21 03:28:22.700426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.638 ms 00:19:35.177 [2024-11-21 03:28:22.700449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.177 [2024-11-21 03:28:22.701847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.177 [2024-11-21 03:28:22.701952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:35.177 [2024-11-21 03:28:22.702014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.297 ms 00:19:35.177 [2024-11-21 03:28:22.702065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.177 [2024-11-21 03:28:22.703326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.177 [2024-11-21 03:28:22.703416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:35.177 [2024-11-21 03:28:22.703467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.179 ms 00:19:35.177 [2024-11-21 03:28:22.703489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.177 [2024-11-21 03:28:22.704741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.177 [2024-11-21 03:28:22.704831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:35.177 [2024-11-21 03:28:22.704887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:19:35.177 [2024-11-21 03:28:22.704920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.177 [2024-11-21 03:28:22.705058] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:35.177 [2024-11-21 03:28:22.705089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.705177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.705208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.705238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.705325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.705357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 
0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.705387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.705448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.705477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.705509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.705567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.705598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.705627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.705689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.705724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.705754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.705783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.705838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706433] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:35.177 [2024-11-21 03:28:22.706440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 
03:28:22.706635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:35.178 [2024-11-21 03:28:22.706810] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:35.178 [2024-11-21 03:28:22.706821] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c1f45e4-d3ad-4cfe-a5aa-597dbbda17c1 00:19:35.178 [2024-11-21 03:28:22.706828] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:35.178 [2024-11-21 03:28:22.706838] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:35.178 [2024-11-21 03:28:22.706845] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:35.178 [2024-11-21 03:28:22.706860] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 
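[editorial note] A brief check of the "WAF: inf" figure above, assuming the conventional definition of write amplification (media writes divided by user writes); with the counters just dumped (total writes: 960, user writes: 0) the ratio is a division by zero, which the stats dump reports as "inf":

$$
\mathrm{WAF} \;=\; \frac{\text{total (media) writes}}{\text{user writes}} \;=\; \frac{960}{0} \;\rightarrow\; \infty
$$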
00:19:35.178 [2024-11-21 03:28:22.706867] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:35.178 [2024-11-21 03:28:22.706877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:35.178 [2024-11-21 03:28:22.706884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:35.178 [2024-11-21 03:28:22.706892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:35.178 [2024-11-21 03:28:22.707102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:35.178 [2024-11-21 03:28:22.707127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.178 [2024-11-21 03:28:22.707147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:35.178 [2024-11-21 03:28:22.707196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.070 ms 00:19:35.178 [2024-11-21 03:28:22.707218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.178 [2024-11-21 03:28:22.708738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.178 [2024-11-21 03:28:22.708824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:35.178 [2024-11-21 03:28:22.708875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.473 ms 00:19:35.178 [2024-11-21 03:28:22.708908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.178 [2024-11-21 03:28:22.709092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.178 [2024-11-21 03:28:22.709167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:35.178 [2024-11-21 03:28:22.709217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:35.178 [2024-11-21 03:28:22.709239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.178 [2024-11-21 03:28:22.714557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.178 [2024-11-21 03:28:22.714660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.178 [2024-11-21 03:28:22.714713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.178 [2024-11-21 03:28:22.714736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.178 [2024-11-21 03:28:22.714871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.178 [2024-11-21 03:28:22.714944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.178 [2024-11-21 03:28:22.714961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.178 [2024-11-21 03:28:22.714969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.178 [2024-11-21 03:28:22.715035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.178 [2024-11-21 03:28:22.715045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.178 [2024-11-21 03:28:22.715055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.178 [2024-11-21 03:28:22.715061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.178 [2024-11-21 03:28:22.715095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.178 [2024-11-21 03:28:22.715103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.178 [2024-11-21 03:28:22.715122] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.178 [2024-11-21 03:28:22.715129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.178 [2024-11-21 03:28:22.724486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.178 [2024-11-21 03:28:22.724595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.178 [2024-11-21 03:28:22.724671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.178 [2024-11-21 03:28:22.724695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.178 [2024-11-21 03:28:22.732422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.178 [2024-11-21 03:28:22.732537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.178 [2024-11-21 03:28:22.732594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.178 [2024-11-21 03:28:22.732627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.178 [2024-11-21 03:28:22.732812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.178 [2024-11-21 03:28:22.732874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.178 [2024-11-21 03:28:22.732939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.178 [2024-11-21 03:28:22.732962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.178 [2024-11-21 03:28:22.733032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.178 [2024-11-21 03:28:22.733055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.178 [2024-11-21 03:28:22.733078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.178 [2024-11-21 03:28:22.733122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.178 [2024-11-21 03:28:22.733240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.178 [2024-11-21 03:28:22.733265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.179 [2024-11-21 03:28:22.733328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.179 [2024-11-21 03:28:22.733351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.179 [2024-11-21 03:28:22.733507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.179 [2024-11-21 03:28:22.733585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:35.179 [2024-11-21 03:28:22.733635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.179 [2024-11-21 03:28:22.733656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.179 [2024-11-21 03:28:22.733717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.179 [2024-11-21 03:28:22.733740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.179 [2024-11-21 03:28:22.733764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.179 [2024-11-21 03:28:22.733808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.179 [2024-11-21 03:28:22.733875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.179 [2024-11-21 03:28:22.733916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base 
bdev 00:19:35.179 [2024-11-21 03:28:22.733997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.179 [2024-11-21 03:28:22.734020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.179 [2024-11-21 03:28:22.734239] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.107 ms, result 0 00:19:35.179 true 00:19:35.437 03:28:22 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 89042 00:19:35.437 03:28:22 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89042 ']' 00:19:35.437 03:28:22 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89042 00:19:35.437 03:28:22 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:35.437 03:28:22 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.437 03:28:22 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89042 00:19:35.437 03:28:22 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.437 03:28:22 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.437 killing process with pid 89042 00:19:35.437 03:28:22 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89042' 00:19:35.437 03:28:22 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89042 00:19:35.437 03:28:22 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89042 00:19:40.726 03:28:27 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:41.296 65536+0 records in 00:19:41.296 65536+0 records out 00:19:41.296 268435456 bytes (268 MB, 256 MiB) copied, 0.801979 s, 335 MB/s 00:19:41.296 03:28:28 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:41.296 [2024-11-21 03:28:28.849223] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:19:41.296 [2024-11-21 03:28:28.849332] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89207 ] 00:19:41.558 [2024-11-21 03:28:28.980495] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
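[editorial note] A quick sanity check of the dd figures above, assuming dd's "MB" denotes 10^6 bytes: 65536 blocks of 4 KiB is 65536 × 4096 = 268435456 bytes (256 MiB), and

$$
\frac{268435456\ \text{bytes}}{0.801979\ \text{s}} \;\approx\; 3.347\times10^{8}\ \text{B/s} \;\approx\; 335\ \text{MB/s},
$$

matching the logged throughput for the random pattern generated for the trim test.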
00:19:41.558 [2024-11-21 03:28:29.012230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.558 [2024-11-21 03:28:29.032396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.819 [2024-11-21 03:28:29.124408] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:41.819 [2024-11-21 03:28:29.124471] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:41.819 [2024-11-21 03:28:29.283016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.819 [2024-11-21 03:28:29.283068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:41.819 [2024-11-21 03:28:29.283083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:41.819 [2024-11-21 03:28:29.283096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.819 [2024-11-21 03:28:29.285587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.819 [2024-11-21 03:28:29.285636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:41.819 [2024-11-21 03:28:29.285647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.466 ms 00:19:41.819 [2024-11-21 03:28:29.285656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.819 [2024-11-21 03:28:29.285751] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:41.819 [2024-11-21 03:28:29.286034] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:41.819 [2024-11-21 03:28:29.286055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.819 [2024-11-21 03:28:29.286064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:41.819 [2024-11-21 03:28:29.286074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:19:41.819 [2024-11-21 03:28:29.286082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.819 [2024-11-21 03:28:29.287766] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:41.819 [2024-11-21 03:28:29.291468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.819 [2024-11-21 03:28:29.291668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:41.819 [2024-11-21 03:28:29.291693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.703 ms 00:19:41.819 [2024-11-21 03:28:29.291701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.819 [2024-11-21 03:28:29.292119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.819 [2024-11-21 03:28:29.292161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:41.819 [2024-11-21 03:28:29.292176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:41.819 [2024-11-21 03:28:29.292188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.819 [2024-11-21 03:28:29.300101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.819 [2024-11-21 03:28:29.300142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:41.819 [2024-11-21 03:28:29.300159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.855 ms 00:19:41.819 [2024-11-21 03:28:29.300169] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.819 [2024-11-21 03:28:29.300312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.819 [2024-11-21 03:28:29.300325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:41.819 [2024-11-21 03:28:29.300335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:41.820 [2024-11-21 03:28:29.300344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.820 [2024-11-21 03:28:29.300374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.820 [2024-11-21 03:28:29.300383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:41.820 [2024-11-21 03:28:29.300392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:41.820 [2024-11-21 03:28:29.300400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.820 [2024-11-21 03:28:29.300421] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:41.820 [2024-11-21 03:28:29.302347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.820 [2024-11-21 03:28:29.302557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:41.820 [2024-11-21 03:28:29.302576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.932 ms 00:19:41.820 [2024-11-21 03:28:29.302593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.820 [2024-11-21 03:28:29.302647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.820 [2024-11-21 03:28:29.302656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:41.820 [2024-11-21 03:28:29.302665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:41.820 [2024-11-21 03:28:29.302674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.820 [2024-11-21 03:28:29.302692] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:41.820 [2024-11-21 03:28:29.302714] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:41.820 [2024-11-21 03:28:29.302750] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:41.820 [2024-11-21 03:28:29.302769] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:41.820 [2024-11-21 03:28:29.302876] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:41.820 [2024-11-21 03:28:29.302889] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:41.820 [2024-11-21 03:28:29.302936] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:41.820 [2024-11-21 03:28:29.302948] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:41.820 [2024-11-21 03:28:29.302957] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:41.820 [2024-11-21 03:28:29.302965] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:41.820 [2024-11-21 03:28:29.302974] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:19:41.820 [2024-11-21 03:28:29.302986] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:41.820 [2024-11-21 03:28:29.302994] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:41.820 [2024-11-21 03:28:29.303004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.820 [2024-11-21 03:28:29.303012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:41.820 [2024-11-21 03:28:29.303021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:19:41.820 [2024-11-21 03:28:29.303028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.820 [2024-11-21 03:28:29.303117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.820 [2024-11-21 03:28:29.303128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:41.820 [2024-11-21 03:28:29.303138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:41.820 [2024-11-21 03:28:29.303146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.820 [2024-11-21 03:28:29.303257] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:41.820 [2024-11-21 03:28:29.303272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:41.820 [2024-11-21 03:28:29.303285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:41.820 [2024-11-21 03:28:29.303294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.820 [2024-11-21 03:28:29.303305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:41.820 [2024-11-21 03:28:29.303315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:41.820 [2024-11-21 03:28:29.303331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:41.820 [2024-11-21 03:28:29.303339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:41.820 [2024-11-21 03:28:29.303347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:41.820 [2024-11-21 03:28:29.303356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:41.820 [2024-11-21 03:28:29.303364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:41.820 [2024-11-21 03:28:29.303372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:41.820 [2024-11-21 03:28:29.303380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:41.820 [2024-11-21 03:28:29.303388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:41.820 [2024-11-21 03:28:29.303396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:41.820 [2024-11-21 03:28:29.303403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.820 [2024-11-21 03:28:29.303412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:41.820 [2024-11-21 03:28:29.303420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:41.820 [2024-11-21 03:28:29.303427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.820 [2024-11-21 03:28:29.303435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:41.820 [2024-11-21 03:28:29.303443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:41.820 [2024-11-21 03:28:29.303450] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.820 [2024-11-21 03:28:29.303465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:41.820 [2024-11-21 03:28:29.303474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:41.820 [2024-11-21 03:28:29.303482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.820 [2024-11-21 03:28:29.303489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:41.820 [2024-11-21 03:28:29.303497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:41.820 [2024-11-21 03:28:29.303506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.820 [2024-11-21 03:28:29.303514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:41.820 [2024-11-21 03:28:29.303522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:41.820 [2024-11-21 03:28:29.303530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.820 [2024-11-21 03:28:29.303537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:41.820 [2024-11-21 03:28:29.303544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:41.820 [2024-11-21 03:28:29.303550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:41.820 [2024-11-21 03:28:29.303557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:41.820 [2024-11-21 03:28:29.303564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:41.820 [2024-11-21 03:28:29.303570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:41.820 [2024-11-21 03:28:29.303576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:41.820 [2024-11-21 03:28:29.303586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:41.820 [2024-11-21 03:28:29.303592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.820 [2024-11-21 03:28:29.303599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:41.820 [2024-11-21 03:28:29.303607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:41.820 [2024-11-21 03:28:29.303614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.820 [2024-11-21 03:28:29.303621] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:41.820 [2024-11-21 03:28:29.303629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:41.820 [2024-11-21 03:28:29.303640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:41.820 [2024-11-21 03:28:29.303648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.820 [2024-11-21 03:28:29.303656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:41.820 [2024-11-21 03:28:29.303664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:41.820 [2024-11-21 03:28:29.303671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:41.820 [2024-11-21 03:28:29.303678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:41.820 [2024-11-21 03:28:29.303685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:41.820 [2024-11-21 03:28:29.303691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:19:41.820 [2024-11-21 03:28:29.303701] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:41.820 [2024-11-21 03:28:29.303713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:41.820 [2024-11-21 03:28:29.303721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:41.820 [2024-11-21 03:28:29.303728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:41.820 [2024-11-21 03:28:29.303735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:41.820 [2024-11-21 03:28:29.303742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:41.820 [2024-11-21 03:28:29.303750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:41.821 [2024-11-21 03:28:29.303757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:41.821 [2024-11-21 03:28:29.303764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:41.821 [2024-11-21 03:28:29.303770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:41.821 [2024-11-21 03:28:29.303777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:41.821 [2024-11-21 03:28:29.303784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:41.821 [2024-11-21 03:28:29.303792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:41.821 [2024-11-21 03:28:29.303799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:41.821 [2024-11-21 03:28:29.303806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:41.821 [2024-11-21 03:28:29.303812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:41.821 [2024-11-21 03:28:29.303819] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:41.821 [2024-11-21 03:28:29.303831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:41.821 [2024-11-21 03:28:29.303840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:41.821 [2024-11-21 03:28:29.303848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:41.821 [2024-11-21 03:28:29.303855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:41.821 [2024-11-21 03:28:29.303863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:41.821 [2024-11-21 03:28:29.303870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.821 [2024-11-21 03:28:29.303877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:41.821 [2024-11-21 03:28:29.303885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:19:41.821 [2024-11-21 03:28:29.303893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.821 [2024-11-21 03:28:29.317458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.821 [2024-11-21 03:28:29.317642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:41.821 [2024-11-21 03:28:29.317659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.497 ms 00:19:41.821 [2024-11-21 03:28:29.317668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.821 [2024-11-21 03:28:29.317800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.821 [2024-11-21 03:28:29.317817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:41.821 [2024-11-21 03:28:29.317831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:41.821 [2024-11-21 03:28:29.317839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.821 [2024-11-21 03:28:29.336879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.821 [2024-11-21 03:28:29.336945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:41.821 [2024-11-21 03:28:29.336967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.015 ms 00:19:41.821 [2024-11-21 03:28:29.336979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.821 [2024-11-21 03:28:29.337072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.821 [2024-11-21 03:28:29.337085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:41.821 [2024-11-21 03:28:29.337101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:41.821 [2024-11-21 03:28:29.337111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.821 [2024-11-21 03:28:29.337619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.821 [2024-11-21 03:28:29.337659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:41.821 [2024-11-21 03:28:29.337672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:19:41.821 [2024-11-21 03:28:29.337680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.821 [2024-11-21 03:28:29.337839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.821 [2024-11-21 03:28:29.337853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:41.821 [2024-11-21 03:28:29.337863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:41.821 [2024-11-21 03:28:29.337871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.821 [2024-11-21 03:28:29.346090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.821 [2024-11-21 
03:28:29.346133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:41.821 [2024-11-21 03:28:29.346144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.195 ms 00:19:41.821 [2024-11-21 03:28:29.346152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.821 [2024-11-21 03:28:29.350173] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:41.821 [2024-11-21 03:28:29.350222] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:41.821 [2024-11-21 03:28:29.350234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.821 [2024-11-21 03:28:29.350243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:41.821 [2024-11-21 03:28:29.350252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.982 ms 00:19:41.821 [2024-11-21 03:28:29.350260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.821 [2024-11-21 03:28:29.366215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.821 [2024-11-21 03:28:29.366278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:41.821 [2024-11-21 03:28:29.366290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.878 ms 00:19:41.821 [2024-11-21 03:28:29.366298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.821 [2024-11-21 03:28:29.369038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.821 [2024-11-21 03:28:29.369078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:41.821 [2024-11-21 03:28:29.369089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.649 ms 00:19:41.821 [2024-11-21 03:28:29.369097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.821 [2024-11-21 03:28:29.371493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.821 [2024-11-21 03:28:29.371679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:41.821 [2024-11-21 03:28:29.371698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.343 ms 00:19:41.821 [2024-11-21 03:28:29.371706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.821 [2024-11-21 03:28:29.372073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.821 [2024-11-21 03:28:29.372096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:41.821 [2024-11-21 03:28:29.372109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:19:41.821 [2024-11-21 03:28:29.372119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.083 [2024-11-21 03:28:29.396728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.083 [2024-11-21 03:28:29.396783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:42.083 [2024-11-21 03:28:29.396795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.584 ms 00:19:42.083 [2024-11-21 03:28:29.396803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.083 [2024-11-21 03:28:29.404888] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:42.083 [2024-11-21 03:28:29.423522] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.083 [2024-11-21 03:28:29.423577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:42.083 [2024-11-21 03:28:29.423591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.631 ms 00:19:42.083 [2024-11-21 03:28:29.423599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.083 [2024-11-21 03:28:29.423685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.083 [2024-11-21 03:28:29.423700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:42.083 [2024-11-21 03:28:29.423710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:42.083 [2024-11-21 03:28:29.423722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.083 [2024-11-21 03:28:29.423776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.083 [2024-11-21 03:28:29.423786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:42.083 [2024-11-21 03:28:29.423799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:42.083 [2024-11-21 03:28:29.423808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.083 [2024-11-21 03:28:29.423832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.083 [2024-11-21 03:28:29.423841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:42.083 [2024-11-21 03:28:29.423851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:42.083 [2024-11-21 03:28:29.423858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.083 [2024-11-21 03:28:29.423934] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:42.083 [2024-11-21 03:28:29.423946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.083 [2024-11-21 03:28:29.423954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:42.083 [2024-11-21 03:28:29.423962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:42.083 [2024-11-21 03:28:29.423978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.083 [2024-11-21 03:28:29.429824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.083 [2024-11-21 03:28:29.429870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:42.083 [2024-11-21 03:28:29.429881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.820 ms 00:19:42.083 [2024-11-21 03:28:29.429889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.083 [2024-11-21 03:28:29.430020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.083 [2024-11-21 03:28:29.430031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:42.083 [2024-11-21 03:28:29.430041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:42.083 [2024-11-21 03:28:29.430050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.083 [2024-11-21 03:28:29.431077] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:42.083 [2024-11-21 03:28:29.432392] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 147.736 
ms, result 0 00:19:42.083 [2024-11-21 03:28:29.433537] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:42.083 [2024-11-21 03:28:29.440918] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:43.027  [2024-11-21T03:28:31.536Z] Copying: 14/256 [MB] (14 MBps) [2024-11-21T03:28:32.479Z] Copying: 33/256 [MB] (19 MBps) [2024-11-21T03:28:33.868Z] Copying: 49/256 [MB] (16 MBps) [2024-11-21T03:28:34.811Z] Copying: 60852/262144 [kB] (10144 kBps) [2024-11-21T03:28:35.756Z] Copying: 93/256 [MB] (34 MBps) [2024-11-21T03:28:36.699Z] Copying: 127/256 [MB] (34 MBps) [2024-11-21T03:28:37.643Z] Copying: 145/256 [MB] (17 MBps) [2024-11-21T03:28:38.587Z] Copying: 169/256 [MB] (23 MBps) [2024-11-21T03:28:39.531Z] Copying: 187/256 [MB] (18 MBps) [2024-11-21T03:28:40.475Z] Copying: 206/256 [MB] (18 MBps) [2024-11-21T03:28:41.421Z] Copying: 244/256 [MB] (38 MBps) [2024-11-21T03:28:41.421Z] Copying: 256/256 [MB] (average 21 MBps)[2024-11-21 03:28:41.169366] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:53.856 [2024-11-21 03:28:41.171316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.856 [2024-11-21 03:28:41.171371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:53.856 [2024-11-21 03:28:41.171385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:53.856 [2024-11-21 03:28:41.171395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.856 [2024-11-21 03:28:41.171421] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:53.856 [2024-11-21 03:28:41.172126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.856 [2024-11-21 03:28:41.172159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:53.856 [2024-11-21 03:28:41.172171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:19:53.856 [2024-11-21 03:28:41.172181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.856 [2024-11-21 03:28:41.175077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.856 [2024-11-21 03:28:41.175120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:53.856 [2024-11-21 03:28:41.175132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.868 ms 00:19:53.856 [2024-11-21 03:28:41.175146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.856 [2024-11-21 03:28:41.183045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.856 [2024-11-21 03:28:41.183097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:53.856 [2024-11-21 03:28:41.183108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.880 ms 00:19:53.856 [2024-11-21 03:28:41.183116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.856 [2024-11-21 03:28:41.190022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.856 [2024-11-21 03:28:41.190070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:53.856 [2024-11-21 03:28:41.190081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.862 ms 00:19:53.856 [2024-11-21 03:28:41.190089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:53.856 [2024-11-21 03:28:41.192590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.856 [2024-11-21 03:28:41.192777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:53.856 [2024-11-21 03:28:41.192796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.432 ms 00:19:53.856 [2024-11-21 03:28:41.192816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.856 [2024-11-21 03:28:41.197262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.856 [2024-11-21 03:28:41.197433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:53.856 [2024-11-21 03:28:41.197517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.308 ms 00:19:53.856 [2024-11-21 03:28:41.197546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.857 [2024-11-21 03:28:41.197726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.857 [2024-11-21 03:28:41.197792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:53.857 [2024-11-21 03:28:41.197891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:53.857 [2024-11-21 03:28:41.197936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.857 [2024-11-21 03:28:41.200518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.857 [2024-11-21 03:28:41.200681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:53.857 [2024-11-21 03:28:41.200747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.538 ms 00:19:53.857 [2024-11-21 03:28:41.200770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.857 [2024-11-21 03:28:41.203112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.857 [2024-11-21 03:28:41.203273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:53.857 [2024-11-21 03:28:41.203331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.292 ms 00:19:53.857 [2024-11-21 03:28:41.203372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.857 [2024-11-21 03:28:41.205097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.857 [2024-11-21 03:28:41.205250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:53.857 [2024-11-21 03:28:41.205311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:19:53.857 [2024-11-21 03:28:41.205334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.857 [2024-11-21 03:28:41.207045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.857 [2024-11-21 03:28:41.207196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:53.857 [2024-11-21 03:28:41.207254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:19:53.857 [2024-11-21 03:28:41.207275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.857 [2024-11-21 03:28:41.207319] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:53.857 [2024-11-21 03:28:41.207352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.207414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.207444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.207473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.207502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.207600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.207631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.207660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.207728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.207757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.207815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.207845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.207933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.207966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.207995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.208973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209650] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:53.857 [2024-11-21 03:28:41.209931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.209939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.209946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.209953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.209961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.209968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.209975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.209982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 
03:28:41.210019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:53.858 [2024-11-21 03:28:41.210211] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:53.858 [2024-11-21 03:28:41.210220] 
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c1f45e4-d3ad-4cfe-a5aa-597dbbda17c1 00:19:53.858 [2024-11-21 03:28:41.210238] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:53.858 [2024-11-21 03:28:41.210250] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:53.858 [2024-11-21 03:28:41.210258] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:53.858 [2024-11-21 03:28:41.210266] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:53.858 [2024-11-21 03:28:41.210274] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:53.858 [2024-11-21 03:28:41.210282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:53.858 [2024-11-21 03:28:41.210290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:53.858 [2024-11-21 03:28:41.210297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:53.858 [2024-11-21 03:28:41.210303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:53.858 [2024-11-21 03:28:41.210312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.858 [2024-11-21 03:28:41.210331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:53.858 [2024-11-21 03:28:41.210341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.993 ms 00:19:53.858 [2024-11-21 03:28:41.210352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-11-21 03:28:41.212642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.858 [2024-11-21 03:28:41.212686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:53.858 [2024-11-21 03:28:41.212698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.249 ms 00:19:53.858 [2024-11-21 03:28:41.212707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-11-21 03:28:41.212833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.858 [2024-11-21 03:28:41.212842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:53.858 [2024-11-21 03:28:41.212851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:19:53.858 [2024-11-21 03:28:41.212859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-11-21 03:28:41.220834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.858 [2024-11-21 03:28:41.220883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:53.858 [2024-11-21 03:28:41.220916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.858 [2024-11-21 03:28:41.220925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-11-21 03:28:41.221006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.858 [2024-11-21 03:28:41.221016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:53.858 [2024-11-21 03:28:41.221029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.858 [2024-11-21 03:28:41.221038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-11-21 03:28:41.221082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.858 [2024-11-21 03:28:41.221092] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:53.858 [2024-11-21 03:28:41.221100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.858 [2024-11-21 03:28:41.221108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-11-21 03:28:41.221124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.858 [2024-11-21 03:28:41.221135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:53.858 [2024-11-21 03:28:41.221143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.858 [2024-11-21 03:28:41.221150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-11-21 03:28:41.234479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.858 [2024-11-21 03:28:41.234531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:53.858 [2024-11-21 03:28:41.234543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.858 [2024-11-21 03:28:41.234551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-11-21 03:28:41.245100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.858 [2024-11-21 03:28:41.245148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:53.858 [2024-11-21 03:28:41.245160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.858 [2024-11-21 03:28:41.245168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-11-21 03:28:41.245213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.858 [2024-11-21 03:28:41.245223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:53.858 [2024-11-21 03:28:41.245232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.858 [2024-11-21 03:28:41.245240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-11-21 03:28:41.245270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.858 [2024-11-21 03:28:41.245279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:53.858 [2024-11-21 03:28:41.245293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.858 [2024-11-21 03:28:41.245302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-11-21 03:28:41.245380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.858 [2024-11-21 03:28:41.245391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:53.858 [2024-11-21 03:28:41.245400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.858 [2024-11-21 03:28:41.245407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-11-21 03:28:41.245438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.858 [2024-11-21 03:28:41.245447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:53.858 [2024-11-21 03:28:41.245459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.858 [2024-11-21 03:28:41.245469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-11-21 03:28:41.245510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:19:53.858 [2024-11-21 03:28:41.245519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:53.858 [2024-11-21 03:28:41.245528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.859 [2024-11-21 03:28:41.245535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.859 [2024-11-21 03:28:41.245582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.859 [2024-11-21 03:28:41.245592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:53.859 [2024-11-21 03:28:41.245603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.859 [2024-11-21 03:28:41.245611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.859 [2024-11-21 03:28:41.245761] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 74.424 ms, result 0 00:19:54.430 00:19:54.430 00:19:54.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.430 03:28:41 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=89348 00:19:54.430 03:28:41 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 89348 00:19:54.430 03:28:41 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89348 ']' 00:19:54.430 03:28:41 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.430 03:28:41 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:54.430 03:28:41 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.430 03:28:41 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.430 03:28:41 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.430 03:28:41 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:54.430 [2024-11-21 03:28:41.869197] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:19:54.430 [2024-11-21 03:28:41.870107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89348 ] 00:19:54.691 [2024-11-21 03:28:42.005946] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:54.691 [2024-11-21 03:28:42.036524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.691 [2024-11-21 03:28:42.065127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.264 03:28:42 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.264 03:28:42 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:55.264 03:28:42 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:55.525 [2024-11-21 03:28:42.935629] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:55.525 [2024-11-21 03:28:42.935877] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:55.787 [2024-11-21 03:28:43.114248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.787 [2024-11-21 03:28:43.114472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:55.787 [2024-11-21 03:28:43.114678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:55.787 [2024-11-21 03:28:43.114729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.787 [2024-11-21 03:28:43.117356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.787 [2024-11-21 03:28:43.117526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:55.787 [2024-11-21 03:28:43.117605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:19:55.787 [2024-11-21 03:28:43.117630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.787 [2024-11-21 03:28:43.117775] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:55.787 [2024-11-21 03:28:43.118330] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:55.787 [2024-11-21 03:28:43.118993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.787 [2024-11-21 03:28:43.119022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:55.787 [2024-11-21 03:28:43.119036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.222 ms 00:19:55.787 [2024-11-21 03:28:43.119045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.787 [2024-11-21 03:28:43.120872] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:55.787 [2024-11-21 03:28:43.124770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.787 [2024-11-21 03:28:43.124849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:55.787 [2024-11-21 03:28:43.124869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.902 ms 00:19:55.787 [2024-11-21 03:28:43.124879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.787 [2024-11-21 03:28:43.124993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.787 [2024-11-21 03:28:43.125010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:55.787 [2024-11-21 03:28:43.125020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:55.787 [2024-11-21 03:28:43.125030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.787 [2024-11-21 03:28:43.133128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.787 [2024-11-21 
03:28:43.133176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:55.787 [2024-11-21 03:28:43.133186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.028 ms 00:19:55.787 [2024-11-21 03:28:43.133197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.787 [2024-11-21 03:28:43.133313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.787 [2024-11-21 03:28:43.133328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:55.787 [2024-11-21 03:28:43.133341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:55.787 [2024-11-21 03:28:43.133354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.787 [2024-11-21 03:28:43.133379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.787 [2024-11-21 03:28:43.133392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:55.787 [2024-11-21 03:28:43.133400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:55.787 [2024-11-21 03:28:43.133408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.787 [2024-11-21 03:28:43.133434] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:55.787 [2024-11-21 03:28:43.135472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.787 [2024-11-21 03:28:43.135511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:55.787 [2024-11-21 03:28:43.135526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.038 ms 00:19:55.787 [2024-11-21 03:28:43.135534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.787 [2024-11-21 03:28:43.135574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.787 [2024-11-21 03:28:43.135583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:55.787 [2024-11-21 03:28:43.135593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:55.787 [2024-11-21 03:28:43.135601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.787 [2024-11-21 03:28:43.135625] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:55.787 [2024-11-21 03:28:43.135645] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:55.787 [2024-11-21 03:28:43.135687] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:55.787 [2024-11-21 03:28:43.135706] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:55.787 [2024-11-21 03:28:43.135814] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:55.787 [2024-11-21 03:28:43.135828] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:55.787 [2024-11-21 03:28:43.135843] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:55.787 [2024-11-21 03:28:43.135854] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:55.787 [2024-11-21 03:28:43.135867] ftl_layout.c: 687:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:55.787 [2024-11-21 03:28:43.135879] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:55.787 [2024-11-21 03:28:43.135888] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:55.787 [2024-11-21 03:28:43.135916] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:55.787 [2024-11-21 03:28:43.135928] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:55.787 [2024-11-21 03:28:43.135936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.787 [2024-11-21 03:28:43.135946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:55.787 [2024-11-21 03:28:43.135954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:19:55.787 [2024-11-21 03:28:43.135964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.787 [2024-11-21 03:28:43.136067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.787 [2024-11-21 03:28:43.136078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:55.787 [2024-11-21 03:28:43.136087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:55.787 [2024-11-21 03:28:43.136096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.787 [2024-11-21 03:28:43.136201] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:55.787 [2024-11-21 03:28:43.136217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:55.787 [2024-11-21 03:28:43.136225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.787 [2024-11-21 03:28:43.136238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.787 [2024-11-21 03:28:43.136247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:55.787 [2024-11-21 03:28:43.136256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:55.787 [2024-11-21 03:28:43.136263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:55.787 [2024-11-21 03:28:43.136271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:55.787 [2024-11-21 03:28:43.136278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:55.787 [2024-11-21 03:28:43.136286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.787 [2024-11-21 03:28:43.136300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:55.787 [2024-11-21 03:28:43.136308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:55.787 [2024-11-21 03:28:43.136314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.787 [2024-11-21 03:28:43.136323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:55.787 [2024-11-21 03:28:43.136329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:55.787 [2024-11-21 03:28:43.136338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.787 [2024-11-21 03:28:43.136344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:55.787 [2024-11-21 03:28:43.136352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:55.787 [2024-11-21 03:28:43.136358] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:19:55.787 [2024-11-21 03:28:43.136371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:55.787 [2024-11-21 03:28:43.136378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:55.787 [2024-11-21 03:28:43.136387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.787 [2024-11-21 03:28:43.136394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:55.787 [2024-11-21 03:28:43.136402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:55.787 [2024-11-21 03:28:43.136409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.787 [2024-11-21 03:28:43.136417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:55.787 [2024-11-21 03:28:43.136424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:55.787 [2024-11-21 03:28:43.136432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.788 [2024-11-21 03:28:43.136439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:55.788 [2024-11-21 03:28:43.136448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:55.788 [2024-11-21 03:28:43.136455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.788 [2024-11-21 03:28:43.136464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:55.788 [2024-11-21 03:28:43.136471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:55.788 [2024-11-21 03:28:43.136480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.788 [2024-11-21 03:28:43.136486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:55.788 [2024-11-21 03:28:43.136496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:55.788 [2024-11-21 03:28:43.136503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.788 [2024-11-21 03:28:43.136511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:55.788 [2024-11-21 03:28:43.136517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:55.788 [2024-11-21 03:28:43.136526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.788 [2024-11-21 03:28:43.136532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:55.788 [2024-11-21 03:28:43.136540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:55.788 [2024-11-21 03:28:43.136547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.788 [2024-11-21 03:28:43.136555] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:55.788 [2024-11-21 03:28:43.136563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:55.788 [2024-11-21 03:28:43.136572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.788 [2024-11-21 03:28:43.136579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.788 [2024-11-21 03:28:43.136588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:55.788 [2024-11-21 03:28:43.136596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:55.788 [2024-11-21 03:28:43.136604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:55.788 [2024-11-21 03:28:43.136611] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:55.788 [2024-11-21 03:28:43.136622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:55.788 [2024-11-21 03:28:43.136629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:55.788 [2024-11-21 03:28:43.136640] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:55.788 [2024-11-21 03:28:43.136650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.788 [2024-11-21 03:28:43.136664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:55.788 [2024-11-21 03:28:43.136673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:55.788 [2024-11-21 03:28:43.136682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:55.788 [2024-11-21 03:28:43.136690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:55.788 [2024-11-21 03:28:43.136699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:55.788 [2024-11-21 03:28:43.136706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:55.788 [2024-11-21 03:28:43.136715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:55.788 [2024-11-21 03:28:43.136722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:55.788 [2024-11-21 03:28:43.136732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:55.788 [2024-11-21 03:28:43.136739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:55.788 [2024-11-21 03:28:43.136748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:55.788 [2024-11-21 03:28:43.136755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:55.788 [2024-11-21 03:28:43.136766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:55.788 [2024-11-21 03:28:43.136773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:55.788 [2024-11-21 03:28:43.136783] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:55.788 [2024-11-21 03:28:43.136796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.788 [2024-11-21 03:28:43.136805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:19:55.788 [2024-11-21 03:28:43.136813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:55.788 [2024-11-21 03:28:43.136822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:55.788 [2024-11-21 03:28:43.136830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:55.788 [2024-11-21 03:28:43.136839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.788 [2024-11-21 03:28:43.136847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:55.788 [2024-11-21 03:28:43.136857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:19:55.788 [2024-11-21 03:28:43.136865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.788 [2024-11-21 03:28:43.150175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.788 [2024-11-21 03:28:43.150222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:55.788 [2024-11-21 03:28:43.150237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.213 ms 00:19:55.788 [2024-11-21 03:28:43.150250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.788 [2024-11-21 03:28:43.150388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.788 [2024-11-21 03:28:43.150399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:55.788 [2024-11-21 03:28:43.150409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:55.788 [2024-11-21 03:28:43.150417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.788 [2024-11-21 03:28:43.163234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.788 [2024-11-21 03:28:43.163419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:55.788 [2024-11-21 03:28:43.163441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.792 ms 00:19:55.788 [2024-11-21 03:28:43.163453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.788 [2024-11-21 03:28:43.163528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.788 [2024-11-21 03:28:43.163538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:55.788 [2024-11-21 03:28:43.163554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:55.788 [2024-11-21 03:28:43.163562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.788 [2024-11-21 03:28:43.164123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.788 [2024-11-21 03:28:43.164152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:55.788 [2024-11-21 03:28:43.164165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:19:55.788 [2024-11-21 03:28:43.164174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.788 [2024-11-21 03:28:43.164341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.788 [2024-11-21 03:28:43.164352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:55.788 [2024-11-21 03:28:43.164366] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:19:55.788 [2024-11-21 03:28:43.164376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.788 [2024-11-21 03:28:43.173005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.788 [2024-11-21 03:28:43.173042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:55.788 [2024-11-21 03:28:43.173055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.602 ms 00:19:55.788 [2024-11-21 03:28:43.173064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.788 [2024-11-21 03:28:43.176937] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:55.788 [2024-11-21 03:28:43.176982] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:55.788 [2024-11-21 03:28:43.176998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.788 [2024-11-21 03:28:43.177006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:55.788 [2024-11-21 03:28:43.177018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.822 ms 00:19:55.788 [2024-11-21 03:28:43.177025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.788 [2024-11-21 03:28:43.193264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.788 [2024-11-21 03:28:43.193329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:55.788 [2024-11-21 03:28:43.193348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.171 ms 00:19:55.788 [2024-11-21 03:28:43.193361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.788 [2024-11-21 03:28:43.196431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.788 [2024-11-21 03:28:43.196482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:55.788 [2024-11-21 03:28:43.196495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.964 ms 00:19:55.788 [2024-11-21 03:28:43.196502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.788 [2024-11-21 03:28:43.199341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.788 [2024-11-21 03:28:43.199389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:55.788 [2024-11-21 03:28:43.199402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.779 ms 00:19:55.788 [2024-11-21 03:28:43.199409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.788 [2024-11-21 03:28:43.199755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.788 [2024-11-21 03:28:43.199767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:55.788 [2024-11-21 03:28:43.199779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:19:55.788 [2024-11-21 03:28:43.199786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.788 [2024-11-21 03:28:43.238146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.789 [2024-11-21 03:28:43.238214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:55.789 [2024-11-21 03:28:43.238235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.331 ms 
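Each Action record in this stretch, together with the name/duration/status records that follow it, is one step of the FTL management pipeline in mngt/ftl_mngt.c executing 'FTL startup'. When triaging a slow startup it helps to pair every name record with the duration record after it and rank the steps; a minimal sketch, assuming the console output above was captured to build.log (the path is an assumption, not taken from this log):

  # Pair each trace_step 'name:' record with the 'duration:' record that
  # follows it, then sort the steps by elapsed time, slowest first.
  awk -F'name: ' '/428:trace_step/ { name = $2 }
                  /430:trace_step/ { split($0, d, "duration: "); print d[2], "-", name }' \
      build.log | sort -rn | head

Against the startup above, 'Restore P2L checkpoints' (38.331 ms) and 'Initialize L2P' (27.407 ms) come out on top.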
00:19:55.789 [2024-11-21 03:28:43.238244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.789 [2024-11-21 03:28:43.246717] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:55.789 [2024-11-21 03:28:43.265750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.789 [2024-11-21 03:28:43.265812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:55.789 [2024-11-21 03:28:43.265825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.407 ms 00:19:55.789 [2024-11-21 03:28:43.265836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.789 [2024-11-21 03:28:43.265956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.789 [2024-11-21 03:28:43.265975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:55.789 [2024-11-21 03:28:43.265985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:55.789 [2024-11-21 03:28:43.266008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.789 [2024-11-21 03:28:43.266068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.789 [2024-11-21 03:28:43.266080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:55.789 [2024-11-21 03:28:43.266089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:55.789 [2024-11-21 03:28:43.266114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.789 [2024-11-21 03:28:43.266140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.789 [2024-11-21 03:28:43.266154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:55.789 [2024-11-21 03:28:43.266165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:55.789 [2024-11-21 03:28:43.266177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.789 [2024-11-21 03:28:43.266214] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:55.789 [2024-11-21 03:28:43.266232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.789 [2024-11-21 03:28:43.266239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:55.789 [2024-11-21 03:28:43.266249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:55.789 [2024-11-21 03:28:43.266256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.789 [2024-11-21 03:28:43.272476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.789 [2024-11-21 03:28:43.272527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:55.789 [2024-11-21 03:28:43.272541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.189 ms 00:19:55.789 [2024-11-21 03:28:43.272552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.789 [2024-11-21 03:28:43.272645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.789 [2024-11-21 03:28:43.272654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:55.789 [2024-11-21 03:28:43.272666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:55.789 [2024-11-21 03:28:43.272673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.789 [2024-11-21 
03:28:43.273849] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:55.789 [2024-11-21 03:28:43.275233] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 159.269 ms, result 0 00:19:55.789 [2024-11-21 03:28:43.277542] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:55.789 Some configs were skipped because the RPC state that can call them passed over. 00:19:55.789 03:28:43 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:56.050 [2024-11-21 03:28:43.514893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.050 [2024-11-21 03:28:43.514974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:56.050 [2024-11-21 03:28:43.514988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.062 ms 00:19:56.050 [2024-11-21 03:28:43.515000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.050 [2024-11-21 03:28:43.515037] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.210 ms, result 0 00:19:56.050 true 00:19:56.050 03:28:43 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:56.311 [2024-11-21 03:28:43.730935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.311 [2024-11-21 03:28:43.730993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:56.311 [2024-11-21 03:28:43.731009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.807 ms 00:19:56.311 [2024-11-21 03:28:43.731018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.311 [2024-11-21 03:28:43.731059] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.936 ms, result 0 00:19:56.311 true 00:19:56.311 03:28:43 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 89348 00:19:56.311 03:28:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89348 ']' 00:19:56.311 03:28:43 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89348 00:19:56.311 03:28:43 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:56.311 03:28:43 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.311 03:28:43 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89348 00:19:56.311 killing process with pid 89348 00:19:56.311 03:28:43 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:56.311 03:28:43 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:56.311 03:28:43 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89348' 00:19:56.311 03:28:43 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89348 00:19:56.311 03:28:43 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89348 00:19:56.573 [2024-11-21 03:28:43.901287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.573 [2024-11-21 03:28:43.901336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:56.573 [2024-11-21 03:28:43.901349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:56.573 [2024-11-21 
03:28:43.901360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.573 [2024-11-21 03:28:43.901382] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:56.573 [2024-11-21 03:28:43.901864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.573 [2024-11-21 03:28:43.901880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:56.573 [2024-11-21 03:28:43.901894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:19:56.573 [2024-11-21 03:28:43.901922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.573 [2024-11-21 03:28:43.902230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.573 [2024-11-21 03:28:43.902251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:56.573 [2024-11-21 03:28:43.902264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:19:56.573 [2024-11-21 03:28:43.902272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.573 [2024-11-21 03:28:43.906801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.573 [2024-11-21 03:28:43.906834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:56.573 [2024-11-21 03:28:43.906846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.506 ms 00:19:56.573 [2024-11-21 03:28:43.906856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.573 [2024-11-21 03:28:43.913809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.573 [2024-11-21 03:28:43.913841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:56.573 [2024-11-21 03:28:43.913855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.911 ms 00:19:56.573 [2024-11-21 03:28:43.913862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.573 [2024-11-21 03:28:43.916308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.573 [2024-11-21 03:28:43.916450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:56.573 [2024-11-21 03:28:43.916470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.360 ms 00:19:56.573 [2024-11-21 03:28:43.916477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.573 [2024-11-21 03:28:43.920539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.573 [2024-11-21 03:28:43.920643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:56.573 [2024-11-21 03:28:43.920698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.021 ms 00:19:56.573 [2024-11-21 03:28:43.920723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.573 [2024-11-21 03:28:43.920920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.573 [2024-11-21 03:28:43.920962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:56.573 [2024-11-21 03:28:43.920985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:56.573 [2024-11-21 03:28:43.921040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.573 [2024-11-21 03:28:43.924051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.573 [2024-11-21 03:28:43.924164] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:56.573 [2024-11-21 03:28:43.924217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.972 ms 00:19:56.573 [2024-11-21 03:28:43.924238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.573 [2024-11-21 03:28:43.926790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.573 [2024-11-21 03:28:43.926915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:56.573 [2024-11-21 03:28:43.926972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.438 ms 00:19:56.573 [2024-11-21 03:28:43.926994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.573 [2024-11-21 03:28:43.928970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.573 [2024-11-21 03:28:43.929078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:56.573 [2024-11-21 03:28:43.929129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.881 ms 00:19:56.573 [2024-11-21 03:28:43.929150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.573 [2024-11-21 03:28:43.930888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.573 [2024-11-21 03:28:43.931047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:56.573 [2024-11-21 03:28:43.931112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.348 ms 00:19:56.573 [2024-11-21 03:28:43.931136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.573 [2024-11-21 03:28:43.931240] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:56.573 [2024-11-21 03:28:43.931282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.931357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.931389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.931446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.931507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.931542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.931600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.931633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.931782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.931816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.931845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.931875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.931925] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.931960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.932039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.932074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.932104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.932136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.932200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:56.573 [2024-11-21 03:28:43.932235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.932264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.932294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.932323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.932453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.932482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.932512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.932540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.932603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.932633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.932662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.932691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.932753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.932785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.932818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.932847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.932877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.932949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.932981] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.933937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 
03:28:43.934045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.934973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.935008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:19:56.574 [2024-11-21 03:28:43.935062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.935072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.935082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.935089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.935098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.935105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.935117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.935124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.935133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.935140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.935151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.935159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.935168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:56.574 [2024-11-21 03:28:43.935184] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:56.574 [2024-11-21 03:28:43.935193] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c1f45e4-d3ad-4cfe-a5aa-597dbbda17c1 00:19:56.574 [2024-11-21 03:28:43.935201] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:56.574 [2024-11-21 03:28:43.935213] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:56.574 [2024-11-21 03:28:43.935219] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:56.574 [2024-11-21 03:28:43.935229] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:56.574 [2024-11-21 03:28:43.935236] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:56.574 [2024-11-21 03:28:43.935253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:56.574 [2024-11-21 03:28:43.935260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:56.574 [2024-11-21 03:28:43.935268] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:56.575 [2024-11-21 03:28:43.935275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:56.575 [2024-11-21 03:28:43.935284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.575 [2024-11-21 03:28:43.935292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:56.575 [2024-11-21 03:28:43.935306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.048 ms 00:19:56.575 [2024-11-21 03:28:43.935313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:56.575 [2024-11-21 03:28:43.936977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.575 [2024-11-21 03:28:43.937010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:56.575 [2024-11-21 03:28:43.937021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.608 ms 00:19:56.575 [2024-11-21 03:28:43.937028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.575 [2024-11-21 03:28:43.937116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.575 [2024-11-21 03:28:43.937124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:56.575 [2024-11-21 03:28:43.937134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:56.575 [2024-11-21 03:28:43.937141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.575 [2024-11-21 03:28:43.943049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.575 [2024-11-21 03:28:43.943084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:56.575 [2024-11-21 03:28:43.943096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.575 [2024-11-21 03:28:43.943103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.575 [2024-11-21 03:28:43.943172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.575 [2024-11-21 03:28:43.943181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:56.575 [2024-11-21 03:28:43.943193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.575 [2024-11-21 03:28:43.943201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.575 [2024-11-21 03:28:43.943250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.575 [2024-11-21 03:28:43.943259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:56.575 [2024-11-21 03:28:43.943268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.575 [2024-11-21 03:28:43.943275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.575 [2024-11-21 03:28:43.943294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.575 [2024-11-21 03:28:43.943302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:56.575 [2024-11-21 03:28:43.943311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.575 [2024-11-21 03:28:43.943318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.575 [2024-11-21 03:28:43.953734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.575 [2024-11-21 03:28:43.953775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:56.575 [2024-11-21 03:28:43.953787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.575 [2024-11-21 03:28:43.953794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.575 [2024-11-21 03:28:43.961696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.575 [2024-11-21 03:28:43.961738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:56.575 [2024-11-21 03:28:43.961752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.575 
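The Rollback records here are 'FTL shutdown' unwinding the startup pipeline in reverse; each entry names the startup step whose resources are being torn down, and the 0.000 ms durations show the hooks returning immediately, since the clean-state dump above already persisted everything. The same shutdown pipeline runs when the FTL bdev is deleted explicitly rather than the app being signalled, as killprocess does here; a minimal sketch of the explicit path (assumes the target app is still up and serving RPCs):

  # Deleting the bdev drives the same 'FTL shutdown' management process
  # seen in this log, flushing metadata before the device goes away.
  ./scripts/rpc.py bdev_ftl_delete -b ftl0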
[2024-11-21 03:28:43.961765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.575 [2024-11-21 03:28:43.961809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.575 [2024-11-21 03:28:43.961821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:56.575 [2024-11-21 03:28:43.961831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.575 [2024-11-21 03:28:43.961838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.575 [2024-11-21 03:28:43.961870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.575 [2024-11-21 03:28:43.961878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:56.575 [2024-11-21 03:28:43.961888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.575 [2024-11-21 03:28:43.961907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.575 [2024-11-21 03:28:43.961979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.575 [2024-11-21 03:28:43.961999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:56.575 [2024-11-21 03:28:43.962008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.575 [2024-11-21 03:28:43.962016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.575 [2024-11-21 03:28:43.962049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.575 [2024-11-21 03:28:43.962062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:56.575 [2024-11-21 03:28:43.962073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.575 [2024-11-21 03:28:43.962080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.575 [2024-11-21 03:28:43.962121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.575 [2024-11-21 03:28:43.962131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:56.575 [2024-11-21 03:28:43.962143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.575 [2024-11-21 03:28:43.962150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.575 [2024-11-21 03:28:43.962198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.575 [2024-11-21 03:28:43.962207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:56.575 [2024-11-21 03:28:43.962217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.575 [2024-11-21 03:28:43.962227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.575 [2024-11-21 03:28:43.962363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.050 ms, result 0 00:19:56.836 03:28:44 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:56.836 03:28:44 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:56.836 [2024-11-21 03:28:44.223720] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
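At trim.sh line 85 the harness launches spdk_dd, a short-lived SPDK app with dd-like semantics: --ib names an input bdev (ftl0), --of a regular output file, --count the number of blocks to copy, and --json a config file from which the whole bdev stack, FTL device included, is recreated inside the new process, which is why a second full 'FTL startup' is logged below. An equivalent manual run, with only the output path swapped for a scratch location (an assumption; any writable path works):

  # Read 65536 blocks from the FTL bdev into a local file, rebuilding the
  # bdev stack from the saved JSON config first.
  ./build/bin/spdk_dd --ib=ftl0 --of=/tmp/ftl_data --count=65536 \
      --json=./test/ftl/config/ftl.json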
00:19:56.836 [2024-11-21 03:28:44.223864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89384 ] 00:19:56.836 [2024-11-21 03:28:44.358553] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:56.836 [2024-11-21 03:28:44.388844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.096 [2024-11-21 03:28:44.418166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.096 [2024-11-21 03:28:44.528402] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:57.096 [2024-11-21 03:28:44.528487] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:57.359 [2024-11-21 03:28:44.689618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.359 [2024-11-21 03:28:44.689686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:57.359 [2024-11-21 03:28:44.689700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:57.360 [2024-11-21 03:28:44.689709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-21 03:28:44.692321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-21 03:28:44.692374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:57.360 [2024-11-21 03:28:44.692389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.590 ms 00:19:57.360 [2024-11-21 03:28:44.692397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-21 03:28:44.692513] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:57.360 [2024-11-21 03:28:44.692781] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:57.360 [2024-11-21 03:28:44.692800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-21 03:28:44.692809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:57.360 [2024-11-21 03:28:44.692819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:19:57.360 [2024-11-21 03:28:44.692827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-21 03:28:44.694795] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:57.360 [2024-11-21 03:28:44.698587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-21 03:28:44.698644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:57.360 [2024-11-21 03:28:44.698656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.795 ms 00:19:57.360 [2024-11-21 03:28:44.698664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-21 03:28:44.698752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-21 03:28:44.698764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:57.360 [2024-11-21 03:28:44.698773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:57.360 [2024-11-21 
03:28:44.698781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-21 03:28:44.707131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-21 03:28:44.707180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:57.360 [2024-11-21 03:28:44.707191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.305 ms 00:19:57.360 [2024-11-21 03:28:44.707202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-21 03:28:44.707336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-21 03:28:44.707349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:57.360 [2024-11-21 03:28:44.707358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:57.360 [2024-11-21 03:28:44.707366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-21 03:28:44.707398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-21 03:28:44.707407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:57.360 [2024-11-21 03:28:44.707414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:57.360 [2024-11-21 03:28:44.707422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-21 03:28:44.707445] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:57.360 [2024-11-21 03:28:44.709519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-21 03:28:44.709556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:57.360 [2024-11-21 03:28:44.709567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.081 ms 00:19:57.360 [2024-11-21 03:28:44.709580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-21 03:28:44.709623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-21 03:28:44.709631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:57.360 [2024-11-21 03:28:44.709644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:57.360 [2024-11-21 03:28:44.709651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-21 03:28:44.709670] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:57.360 [2024-11-21 03:28:44.709692] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:57.360 [2024-11-21 03:28:44.709729] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:57.360 [2024-11-21 03:28:44.709746] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:57.360 [2024-11-21 03:28:44.709851] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:57.360 [2024-11-21 03:28:44.709863] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:57.360 [2024-11-21 03:28:44.709874] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
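The v5 superblock serializes the metadata layout as three blobs (0x150 bytes for the NV-cache regions, 0x48 for the base-device regions, 0x190 for the combined layout), loaded here and immediately re-stored during layout setup. The blk_offs/blk_sz values in the region dumps that follow are counted in 4 KiB FTL blocks, which is how the hex sizes reconcile with the MiB figures; a quick consistency check in shell arithmetic (the block size is inferred from the dump itself, not stated in the log):

  # l2p region: blk_sz 0x5a00 blocks x 4096 bytes/block = 90 MiB,
  # matching the 'l2p ... blocks: 90.00 MiB' line in the layout dump.
  printf '%d MiB\n' $(( 0x5a00 * 4096 / 1048576 ))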
00:19:57.360 [2024-11-21 03:28:44.709884] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:57.360 [2024-11-21 03:28:44.709894] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:57.360 [2024-11-21 03:28:44.709923] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:57.360 [2024-11-21 03:28:44.709931] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:57.360 [2024-11-21 03:28:44.709942] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:57.360 [2024-11-21 03:28:44.709957] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:57.360 [2024-11-21 03:28:44.709968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-21 03:28:44.709975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:57.360 [2024-11-21 03:28:44.709984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:19:57.360 [2024-11-21 03:28:44.710008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-21 03:28:44.710102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-21 03:28:44.710112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:57.360 [2024-11-21 03:28:44.710119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:19:57.360 [2024-11-21 03:28:44.710127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-21 03:28:44.710232] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:57.360 [2024-11-21 03:28:44.710246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:57.360 [2024-11-21 03:28:44.710256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:57.360 [2024-11-21 03:28:44.710265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.360 [2024-11-21 03:28:44.710274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:57.360 [2024-11-21 03:28:44.710282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:57.360 [2024-11-21 03:28:44.710301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:57.360 [2024-11-21 03:28:44.710310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:57.360 [2024-11-21 03:28:44.710318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:57.360 [2024-11-21 03:28:44.710326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:57.360 [2024-11-21 03:28:44.710335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:57.360 [2024-11-21 03:28:44.710343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:57.360 [2024-11-21 03:28:44.710350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:57.360 [2024-11-21 03:28:44.710358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:57.360 [2024-11-21 03:28:44.710366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:57.360 [2024-11-21 03:28:44.710374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.360 [2024-11-21 03:28:44.710383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:19:57.360 [2024-11-21 03:28:44.710391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:57.360 [2024-11-21 03:28:44.710398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.360 [2024-11-21 03:28:44.710410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:57.360 [2024-11-21 03:28:44.710418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:57.360 [2024-11-21 03:28:44.710427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.360 [2024-11-21 03:28:44.710439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:57.360 [2024-11-21 03:28:44.710446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:57.360 [2024-11-21 03:28:44.710454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.360 [2024-11-21 03:28:44.710462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:57.360 [2024-11-21 03:28:44.710470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:57.360 [2024-11-21 03:28:44.710478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.360 [2024-11-21 03:28:44.710485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:57.360 [2024-11-21 03:28:44.710493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:57.360 [2024-11-21 03:28:44.710501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.360 [2024-11-21 03:28:44.710508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:57.360 [2024-11-21 03:28:44.710516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:57.360 [2024-11-21 03:28:44.710524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:57.360 [2024-11-21 03:28:44.710531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:57.360 [2024-11-21 03:28:44.710538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:57.360 [2024-11-21 03:28:44.710544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:57.360 [2024-11-21 03:28:44.710551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:57.360 [2024-11-21 03:28:44.710559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:57.360 [2024-11-21 03:28:44.710565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.360 [2024-11-21 03:28:44.710572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:57.361 [2024-11-21 03:28:44.710579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:57.361 [2024-11-21 03:28:44.710585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.361 [2024-11-21 03:28:44.710592] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:57.361 [2024-11-21 03:28:44.710600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:57.361 [2024-11-21 03:28:44.710608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:57.361 [2024-11-21 03:28:44.710616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.361 [2024-11-21 03:28:44.710623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:57.361 [2024-11-21 03:28:44.710630] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:57.361 [2024-11-21 03:28:44.710637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:57.361 [2024-11-21 03:28:44.710644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:57.361 [2024-11-21 03:28:44.710651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:57.361 [2024-11-21 03:28:44.710658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:57.361 [2024-11-21 03:28:44.710667] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:57.361 [2024-11-21 03:28:44.710678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:57.361 [2024-11-21 03:28:44.710687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:57.361 [2024-11-21 03:28:44.710694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:57.361 [2024-11-21 03:28:44.710701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:57.361 [2024-11-21 03:28:44.710708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:57.361 [2024-11-21 03:28:44.710715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:57.361 [2024-11-21 03:28:44.710722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:57.361 [2024-11-21 03:28:44.710729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:57.361 [2024-11-21 03:28:44.710736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:57.361 [2024-11-21 03:28:44.710743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:57.361 [2024-11-21 03:28:44.710751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:57.361 [2024-11-21 03:28:44.710758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:57.361 [2024-11-21 03:28:44.710765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:57.361 [2024-11-21 03:28:44.710772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:57.361 [2024-11-21 03:28:44.710780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:57.361 [2024-11-21 03:28:44.710787] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:57.361 [2024-11-21 03:28:44.710799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:57.361 [2024-11-21 03:28:44.710808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:57.361 [2024-11-21 03:28:44.710815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:57.361 [2024-11-21 03:28:44.710822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:57.361 [2024-11-21 03:28:44.710829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:57.361 [2024-11-21 03:28:44.710837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.361 [2024-11-21 03:28:44.710844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:57.361 [2024-11-21 03:28:44.710853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:19:57.361 [2024-11-21 03:28:44.710860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.361 [2024-11-21 03:28:44.724456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.361 [2024-11-21 03:28:44.724514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.361 [2024-11-21 03:28:44.724528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.531 ms 00:19:57.361 [2024-11-21 03:28:44.724537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.361 [2024-11-21 03:28:44.724677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.361 [2024-11-21 03:28:44.724695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:57.361 [2024-11-21 03:28:44.724705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:57.361 [2024-11-21 03:28:44.724714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.361 [2024-11-21 03:28:44.748678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.361 [2024-11-21 03:28:44.749014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.361 [2024-11-21 03:28:44.749052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.937 ms 00:19:57.361 [2024-11-21 03:28:44.749080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.361 [2024-11-21 03:28:44.749240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.361 [2024-11-21 03:28:44.749264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.361 [2024-11-21 03:28:44.749282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:57.361 [2024-11-21 03:28:44.749297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.361 [2024-11-21 03:28:44.749894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.361 [2024-11-21 03:28:44.749982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.361 [2024-11-21 03:28:44.750035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:19:57.361 [2024-11-21 03:28:44.750058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.361 [2024-11-21 03:28:44.750321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:57.361 [2024-11-21 03:28:44.750352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.361 [2024-11-21 03:28:44.750369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:19:57.361 [2024-11-21 03:28:44.750384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.361 [2024-11-21 03:28:44.759703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.361 [2024-11-21 03:28:44.759761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:57.361 [2024-11-21 03:28:44.759772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.277 ms 00:19:57.361 [2024-11-21 03:28:44.759780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.361 [2024-11-21 03:28:44.763857] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:57.361 [2024-11-21 03:28:44.764066] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:57.361 [2024-11-21 03:28:44.764084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.361 [2024-11-21 03:28:44.764093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:57.361 [2024-11-21 03:28:44.764102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.161 ms 00:19:57.361 [2024-11-21 03:28:44.764110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.361 [2024-11-21 03:28:44.780340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.361 [2024-11-21 03:28:44.780400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:57.361 [2024-11-21 03:28:44.780417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.159 ms 00:19:57.361 [2024-11-21 03:28:44.780425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.361 [2024-11-21 03:28:44.783666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.361 [2024-11-21 03:28:44.783842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:57.361 [2024-11-21 03:28:44.783860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.143 ms 00:19:57.361 [2024-11-21 03:28:44.783869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.361 [2024-11-21 03:28:44.786450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.361 [2024-11-21 03:28:44.786500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:57.361 [2024-11-21 03:28:44.786510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.510 ms 00:19:57.361 [2024-11-21 03:28:44.786518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.361 [2024-11-21 03:28:44.786876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.361 [2024-11-21 03:28:44.786888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:57.361 [2024-11-21 03:28:44.787037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:19:57.361 [2024-11-21 03:28:44.787084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.361 [2024-11-21 03:28:44.811785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.361 [2024-11-21 03:28:44.812019] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:57.361 [2024-11-21 03:28:44.812189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.655 ms 00:19:57.361 [2024-11-21 03:28:44.812229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-21 03:28:44.820567] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:57.362 [2024-11-21 03:28:44.839072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.362 [2024-11-21 03:28:44.839240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:57.362 [2024-11-21 03:28:44.839295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.737 ms 00:19:57.362 [2024-11-21 03:28:44.839319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-21 03:28:44.839424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.362 [2024-11-21 03:28:44.839453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:57.362 [2024-11-21 03:28:44.839477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:57.362 [2024-11-21 03:28:44.839496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-21 03:28:44.839568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.362 [2024-11-21 03:28:44.839663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:57.362 [2024-11-21 03:28:44.839689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:57.362 [2024-11-21 03:28:44.839709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-21 03:28:44.839750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.362 [2024-11-21 03:28:44.839780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:57.362 [2024-11-21 03:28:44.839801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:57.362 [2024-11-21 03:28:44.839827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-21 03:28:44.839955] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:57.362 [2024-11-21 03:28:44.839987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.362 [2024-11-21 03:28:44.840007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:57.362 [2024-11-21 03:28:44.840143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:57.362 [2024-11-21 03:28:44.840218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-21 03:28:44.846020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.362 [2024-11-21 03:28:44.846172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:57.362 [2024-11-21 03:28:44.846227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.756 ms 00:19:57.362 [2024-11-21 03:28:44.846249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-21 03:28:44.846692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.362 [2024-11-21 03:28:44.846784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:57.362 [2024-11-21 03:28:44.846812] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:57.362 [2024-11-21 03:28:44.846938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-21 03:28:44.848065] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:57.362 [2024-11-21 03:28:44.849579] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.113 ms, result 0 00:19:57.362 [2024-11-21 03:28:44.851039] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:57.362 [2024-11-21 03:28:44.858416] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:58.305  [2024-11-21T03:28:47.328Z] Copying: 21/256 [MB] (21 MBps) [2024-11-21T03:28:47.906Z] Copying: 37/256 [MB] (15 MBps) [2024-11-21T03:28:49.294Z] Copying: 54/256 [MB] (16 MBps) [2024-11-21T03:28:49.866Z] Copying: 73/256 [MB] (19 MBps) [2024-11-21T03:28:51.251Z] Copying: 86/256 [MB] (12 MBps) [2024-11-21T03:28:52.194Z] Copying: 96/256 [MB] (10 MBps) [2024-11-21T03:28:53.137Z] Copying: 114/256 [MB] (17 MBps) [2024-11-21T03:28:54.081Z] Copying: 128/256 [MB] (14 MBps) [2024-11-21T03:28:55.026Z] Copying: 147/256 [MB] (18 MBps) [2024-11-21T03:28:55.967Z] Copying: 158/256 [MB] (11 MBps) [2024-11-21T03:28:56.911Z] Copying: 177/256 [MB] (18 MBps) [2024-11-21T03:28:58.293Z] Copying: 190/256 [MB] (13 MBps) [2024-11-21T03:28:58.866Z] Copying: 206/256 [MB] (15 MBps) [2024-11-21T03:29:00.254Z] Copying: 227/256 [MB] (21 MBps) [2024-11-21T03:29:01.199Z] Copying: 238/256 [MB] (10 MBps) [2024-11-21T03:29:01.199Z] Copying: 253/256 [MB] (15 MBps) [2024-11-21T03:29:01.199Z] Copying: 256/256 [MB] (average 15 MBps)[2024-11-21 03:29:00.976180] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:13.634 [2024-11-21 03:29:00.977179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.634 [2024-11-21 03:29:00.977204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:13.634 [2024-11-21 03:29:00.977214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:13.634 [2024-11-21 03:29:00.977221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.634 [2024-11-21 03:29:00.977238] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:13.634 [2024-11-21 03:29:00.977598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.634 [2024-11-21 03:29:00.977614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:13.634 [2024-11-21 03:29:00.977621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:20:13.634 [2024-11-21 03:29:00.977632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.634 [2024-11-21 03:29:00.977824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.634 [2024-11-21 03:29:00.977831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:13.634 [2024-11-21 03:29:00.977839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:20:13.634 [2024-11-21 03:29:00.977845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.634 [2024-11-21 03:29:00.980672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:13.634 [2024-11-21 03:29:00.980689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:13.634 [2024-11-21 03:29:00.980696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.815 ms 00:20:13.634 [2024-11-21 03:29:00.980703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.634 [2024-11-21 03:29:00.985996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.634 [2024-11-21 03:29:00.986017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:13.634 [2024-11-21 03:29:00.986025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.279 ms 00:20:13.634 [2024-11-21 03:29:00.986035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.634 [2024-11-21 03:29:00.987406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.634 [2024-11-21 03:29:00.987513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:13.634 [2024-11-21 03:29:00.987532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.320 ms 00:20:13.634 [2024-11-21 03:29:00.987538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.634 [2024-11-21 03:29:00.990663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.634 [2024-11-21 03:29:00.990762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:13.634 [2024-11-21 03:29:00.990774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.099 ms 00:20:13.634 [2024-11-21 03:29:00.990780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.634 [2024-11-21 03:29:00.990870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.634 [2024-11-21 03:29:00.990878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:13.634 [2024-11-21 03:29:00.990884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:13.634 [2024-11-21 03:29:00.990894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.634 [2024-11-21 03:29:00.992864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.634 [2024-11-21 03:29:00.992890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:13.634 [2024-11-21 03:29:00.992905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.944 ms 00:20:13.634 [2024-11-21 03:29:00.992910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.634 [2024-11-21 03:29:00.994440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.635 [2024-11-21 03:29:00.994465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:13.635 [2024-11-21 03:29:00.994471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.507 ms 00:20:13.635 [2024-11-21 03:29:00.994476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.635 [2024-11-21 03:29:00.995654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.635 [2024-11-21 03:29:00.995678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:13.635 [2024-11-21 03:29:00.995685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:20:13.635 [2024-11-21 03:29:00.995690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.635 [2024-11-21 
03:29:00.996758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.635 [2024-11-21 03:29:00.996783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:13.635 [2024-11-21 03:29:00.996789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:20:13.635 [2024-11-21 03:29:00.996794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.635 [2024-11-21 03:29:00.996817] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:13.635 [2024-11-21 03:29:00.996827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 
[2024-11-21 03:29:00.996961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.996995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 
state: free 00:20:13.635 [2024-11-21 03:29:00.997100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 
0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:13.635 [2024-11-21 03:29:00.997278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:13.636 [2024-11-21 03:29:00.997419] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:13.636 [2024-11-21 03:29:00.997431] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c1f45e4-d3ad-4cfe-a5aa-597dbbda17c1 00:20:13.636 [2024-11-21 03:29:00.997437] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:13.636 [2024-11-21 03:29:00.997447] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:13.636 [2024-11-21 03:29:00.997452] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:13.636 [2024-11-21 03:29:00.997460] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:13.636 [2024-11-21 03:29:00.997465] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:13.636 [2024-11-21 03:29:00.997471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:13.636 [2024-11-21 03:29:00.997479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:13.636 [2024-11-21 03:29:00.997484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:13.636 [2024-11-21 03:29:00.997488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:13.636 [2024-11-21 03:29:00.997493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.636 [2024-11-21 03:29:00.997499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:13.636 [2024-11-21 03:29:00.997505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:20:13.636 [2024-11-21 03:29:00.997510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.636 [2024-11-21 03:29:00.998688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.636 [2024-11-21 03:29:00.998705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:13.636 [2024-11-21 03:29:00.998712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.163 ms 00:20:13.636 [2024-11-21 03:29:00.998717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.636 [2024-11-21 03:29:00.998785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.636 [2024-11-21 03:29:00.998791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:13.636 [2024-11-21 03:29:00.998797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:13.636 [2024-11-21 03:29:00.998802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.636 [2024-11-21 03:29:01.003087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.636 [2024-11-21 03:29:01.003112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:13.636 [2024-11-21 03:29:01.003119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:20:13.636 [2024-11-21 03:29:01.003128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.636 [2024-11-21 03:29:01.003167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.636 [2024-11-21 03:29:01.003177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:13.636 [2024-11-21 03:29:01.003185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.636 [2024-11-21 03:29:01.003190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.636 [2024-11-21 03:29:01.003217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.636 [2024-11-21 03:29:01.003224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:13.636 [2024-11-21 03:29:01.003230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.636 [2024-11-21 03:29:01.003235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.636 [2024-11-21 03:29:01.003249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.636 [2024-11-21 03:29:01.003255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:13.636 [2024-11-21 03:29:01.003261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.636 [2024-11-21 03:29:01.003267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.636 [2024-11-21 03:29:01.010611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.636 [2024-11-21 03:29:01.010642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:13.636 [2024-11-21 03:29:01.010650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.636 [2024-11-21 03:29:01.010656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.636 [2024-11-21 03:29:01.016613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.636 [2024-11-21 03:29:01.016754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:13.636 [2024-11-21 03:29:01.016765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.636 [2024-11-21 03:29:01.016777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.636 [2024-11-21 03:29:01.016797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.636 [2024-11-21 03:29:01.016804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:13.636 [2024-11-21 03:29:01.016810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.636 [2024-11-21 03:29:01.016816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.636 [2024-11-21 03:29:01.016838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.636 [2024-11-21 03:29:01.016849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:13.636 [2024-11-21 03:29:01.016855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.636 [2024-11-21 03:29:01.016861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.636 [2024-11-21 03:29:01.016928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.636 [2024-11-21 03:29:01.016936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 
00:20:13.636 [2024-11-21 03:29:01.016945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.636 [2024-11-21 03:29:01.016951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.636 [2024-11-21 03:29:01.016976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.636 [2024-11-21 03:29:01.016984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:13.636 [2024-11-21 03:29:01.016992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.636 [2024-11-21 03:29:01.016997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.636 [2024-11-21 03:29:01.017025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.636 [2024-11-21 03:29:01.017032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:13.636 [2024-11-21 03:29:01.017038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.636 [2024-11-21 03:29:01.017044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.636 [2024-11-21 03:29:01.017079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.636 [2024-11-21 03:29:01.017088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:13.636 [2024-11-21 03:29:01.017094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.636 [2024-11-21 03:29:01.017100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.636 [2024-11-21 03:29:01.017198] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 40.008 ms, result 0 00:20:13.636 00:20:13.636 00:20:13.636 03:29:01 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:13.636 03:29:01 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:14.207 03:29:01 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:14.466 [2024-11-21 03:29:01.831117] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:20:14.466 [2024-11-21 03:29:01.831258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89572 ] 00:20:14.466 [2024-11-21 03:29:01.967130] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
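Aside: the trace_step notices that dominate this log follow one fixed pattern — each management step emits an Action header, the step name, a duration in milliseconds, and a status code. The sketch below illustrates how such a step tracer could be structured; it is a minimal, hypothetical reconstruction for readability only (the struct and function names are invented and are not SPDK's actual mngt/ftl_mngt.c internals).

    /* Hypothetical sketch of a step tracer producing output shaped like
     * the trace_step notices in this log (Action / name / duration /
     * status). Illustrative only; not SPDK's ftl_mngt implementation. */
    #include <stdio.h>
    #include <time.h>

    struct step_trace {
        const char     *name;   /* e.g. "Initialize NV cache" */
        struct timespec start;  /* captured when the step begins */
    };

    static void step_begin(struct step_trace *t, const char *name)
    {
        t->name = name;
        clock_gettime(CLOCK_MONOTONIC, &t->start);
    }

    static void step_end(const struct step_trace *t, int status)
    {
        struct timespec end;
        clock_gettime(CLOCK_MONOTONIC, &end);
        double ms = (end.tv_sec - t->start.tv_sec) * 1e3 +
                    (end.tv_nsec - t->start.tv_nsec) / 1e6;
        printf("[FTL][ftl0] Action\n");
        printf("[FTL][ftl0]  name:     %s\n", t->name);
        printf("[FTL][ftl0]  duration: %.3f ms\n", ms);
        printf("[FTL][ftl0]  status:   %d\n", status);
    }

    int main(void)
    {
        struct step_trace t;
        step_begin(&t, "Initialize NV cache");
        /* ... the step body would run here ... */
        step_end(&t, 0);
        return 0;
    }

Each startup or shutdown phase in the log above ("Load super block", "Restore P2L checkpoints", "Persist superblock", ...) is one such begin/end pair, which is why every phase reports exactly these four fields.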
00:20:14.466 [2024-11-21 03:29:01.995297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.467 [2024-11-21 03:29:02.018525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.728 [2024-11-21 03:29:02.103827] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:14.728 [2024-11-21 03:29:02.103886] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:14.728 [2024-11-21 03:29:02.245965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.728 [2024-11-21 03:29:02.246011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:14.728 [2024-11-21 03:29:02.246022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:14.728 [2024-11-21 03:29:02.246028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.728 [2024-11-21 03:29:02.247721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.728 [2024-11-21 03:29:02.247867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:14.728 [2024-11-21 03:29:02.247885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.677 ms 00:20:14.728 [2024-11-21 03:29:02.247890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.728 [2024-11-21 03:29:02.247959] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:14.728 [2024-11-21 03:29:02.248134] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:14.728 [2024-11-21 03:29:02.248147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.728 [2024-11-21 03:29:02.248156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:14.728 [2024-11-21 03:29:02.248163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:20:14.728 [2024-11-21 03:29:02.248168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.728 [2024-11-21 03:29:02.249095] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:14.728 [2024-11-21 03:29:02.251033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.728 [2024-11-21 03:29:02.251060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:14.728 [2024-11-21 03:29:02.251071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.939 ms 00:20:14.728 [2024-11-21 03:29:02.251077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.728 [2024-11-21 03:29:02.251127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.728 [2024-11-21 03:29:02.251134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:14.729 [2024-11-21 03:29:02.251144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:14.729 [2024-11-21 03:29:02.251150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.729 [2024-11-21 03:29:02.255379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.729 [2024-11-21 03:29:02.255402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:14.729 [2024-11-21 03:29:02.255411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.198 ms 00:20:14.729 [2024-11-21 03:29:02.255417] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.729 [2024-11-21 03:29:02.255504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.729 [2024-11-21 03:29:02.255513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:14.729 [2024-11-21 03:29:02.255522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:14.729 [2024-11-21 03:29:02.255530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.729 [2024-11-21 03:29:02.255549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.729 [2024-11-21 03:29:02.255555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:14.729 [2024-11-21 03:29:02.255561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:14.729 [2024-11-21 03:29:02.255567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.729 [2024-11-21 03:29:02.255581] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:14.729 [2024-11-21 03:29:02.256691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.729 [2024-11-21 03:29:02.256713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:14.729 [2024-11-21 03:29:02.256724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:20:14.729 [2024-11-21 03:29:02.256732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.729 [2024-11-21 03:29:02.256758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.729 [2024-11-21 03:29:02.256769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:14.729 [2024-11-21 03:29:02.256775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:14.729 [2024-11-21 03:29:02.256780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.729 [2024-11-21 03:29:02.256794] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:14.729 [2024-11-21 03:29:02.256807] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:14.729 [2024-11-21 03:29:02.256835] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:14.729 [2024-11-21 03:29:02.256848] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:14.729 [2024-11-21 03:29:02.256941] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:14.729 [2024-11-21 03:29:02.256950] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:14.729 [2024-11-21 03:29:02.256958] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:14.729 [2024-11-21 03:29:02.256965] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:14.729 [2024-11-21 03:29:02.256976] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:14.729 [2024-11-21 03:29:02.256982] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:14.729 [2024-11-21 03:29:02.256988] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:20:14.729 [2024-11-21 03:29:02.257002] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:14.729 [2024-11-21 03:29:02.257007] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:14.729 [2024-11-21 03:29:02.257016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.729 [2024-11-21 03:29:02.257021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:14.729 [2024-11-21 03:29:02.257027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:20:14.729 [2024-11-21 03:29:02.257033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.729 [2024-11-21 03:29:02.257101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.729 [2024-11-21 03:29:02.257107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:14.729 [2024-11-21 03:29:02.257113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:14.729 [2024-11-21 03:29:02.257118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.729 [2024-11-21 03:29:02.257193] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:14.729 [2024-11-21 03:29:02.257203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:14.729 [2024-11-21 03:29:02.257209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.729 [2024-11-21 03:29:02.257215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.729 [2024-11-21 03:29:02.257220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:14.729 [2024-11-21 03:29:02.257226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:14.729 [2024-11-21 03:29:02.257234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:14.729 [2024-11-21 03:29:02.257241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:14.729 [2024-11-21 03:29:02.257247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:14.729 [2024-11-21 03:29:02.257252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.729 [2024-11-21 03:29:02.257257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:14.729 [2024-11-21 03:29:02.257262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:14.729 [2024-11-21 03:29:02.257266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.729 [2024-11-21 03:29:02.257271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:14.729 [2024-11-21 03:29:02.257277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:14.729 [2024-11-21 03:29:02.257282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.729 [2024-11-21 03:29:02.257287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:14.729 [2024-11-21 03:29:02.257293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:14.729 [2024-11-21 03:29:02.257297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.729 [2024-11-21 03:29:02.257302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:14.729 [2024-11-21 03:29:02.257308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:14.729 [2024-11-21 03:29:02.257313] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.729 [2024-11-21 03:29:02.257318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:14.729 [2024-11-21 03:29:02.257325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:14.729 [2024-11-21 03:29:02.257331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.729 [2024-11-21 03:29:02.257336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:14.729 [2024-11-21 03:29:02.257341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:14.729 [2024-11-21 03:29:02.257347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.729 [2024-11-21 03:29:02.257352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:14.729 [2024-11-21 03:29:02.257358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:14.729 [2024-11-21 03:29:02.257364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.729 [2024-11-21 03:29:02.257369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:14.729 [2024-11-21 03:29:02.257375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:14.729 [2024-11-21 03:29:02.257381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.729 [2024-11-21 03:29:02.257386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:14.729 [2024-11-21 03:29:02.257392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:14.729 [2024-11-21 03:29:02.257397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.729 [2024-11-21 03:29:02.257403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:14.729 [2024-11-21 03:29:02.257409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:14.729 [2024-11-21 03:29:02.257416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.729 [2024-11-21 03:29:02.257422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:14.729 [2024-11-21 03:29:02.257428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:14.729 [2024-11-21 03:29:02.257434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.729 [2024-11-21 03:29:02.257440] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:14.729 [2024-11-21 03:29:02.257446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:14.729 [2024-11-21 03:29:02.257452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.729 [2024-11-21 03:29:02.257460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.729 [2024-11-21 03:29:02.257466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:14.729 [2024-11-21 03:29:02.257473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:14.729 [2024-11-21 03:29:02.257479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:14.729 [2024-11-21 03:29:02.257484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:14.729 [2024-11-21 03:29:02.257490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:14.729 [2024-11-21 03:29:02.257496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:20:14.729 [2024-11-21 03:29:02.257502] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:14.729 [2024-11-21 03:29:02.257510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.729 [2024-11-21 03:29:02.257520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:14.729 [2024-11-21 03:29:02.257527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:14.729 [2024-11-21 03:29:02.257534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:14.730 [2024-11-21 03:29:02.257541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:14.730 [2024-11-21 03:29:02.257547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:14.730 [2024-11-21 03:29:02.257552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:14.730 [2024-11-21 03:29:02.257559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:14.730 [2024-11-21 03:29:02.257565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:14.730 [2024-11-21 03:29:02.257571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:14.730 [2024-11-21 03:29:02.257577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:14.730 [2024-11-21 03:29:02.257583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:14.730 [2024-11-21 03:29:02.257589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:14.730 [2024-11-21 03:29:02.257595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:14.730 [2024-11-21 03:29:02.257601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:14.730 [2024-11-21 03:29:02.257607] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:14.730 [2024-11-21 03:29:02.257619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.730 [2024-11-21 03:29:02.257629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:14.730 [2024-11-21 03:29:02.257635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:14.730 [2024-11-21 03:29:02.257642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:14.730 [2024-11-21 03:29:02.257648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:14.730 [2024-11-21 03:29:02.257654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.730 [2024-11-21 03:29:02.257661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:14.730 [2024-11-21 03:29:02.257667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:20:14.730 [2024-11-21 03:29:02.257673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.730 [2024-11-21 03:29:02.265246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.730 [2024-11-21 03:29:02.265271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:14.730 [2024-11-21 03:29:02.265279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.536 ms 00:20:14.730 [2024-11-21 03:29:02.265285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.730 [2024-11-21 03:29:02.265363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.730 [2024-11-21 03:29:02.265374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:14.730 [2024-11-21 03:29:02.265380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:14.730 [2024-11-21 03:29:02.265388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.730 [2024-11-21 03:29:02.280981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.730 [2024-11-21 03:29:02.281099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:14.730 [2024-11-21 03:29:02.281113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.575 ms 00:20:14.730 [2024-11-21 03:29:02.281119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.730 [2024-11-21 03:29:02.281183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.730 [2024-11-21 03:29:02.281192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:14.730 [2024-11-21 03:29:02.281203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:14.730 [2024-11-21 03:29:02.281208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.730 [2024-11-21 03:29:02.281488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.730 [2024-11-21 03:29:02.281499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:14.730 [2024-11-21 03:29:02.281506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:20:14.730 [2024-11-21 03:29:02.281512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.730 [2024-11-21 03:29:02.281614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.730 [2024-11-21 03:29:02.281623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:14.730 [2024-11-21 03:29:02.281629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:14.730 [2024-11-21 03:29:02.281635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.730 [2024-11-21 03:29:02.287427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.730 [2024-11-21 
03:29:02.287467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:14.730 [2024-11-21 03:29:02.287480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.776 ms 00:20:14.730 [2024-11-21 03:29:02.287490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.290131] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:14.993 [2024-11-21 03:29:02.290173] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:14.993 [2024-11-21 03:29:02.290188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.290199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:14.993 [2024-11-21 03:29:02.290210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.587 ms 00:20:14.993 [2024-11-21 03:29:02.290220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.302646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.302670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:14.993 [2024-11-21 03:29:02.302679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.372 ms 00:20:14.993 [2024-11-21 03:29:02.302686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.304427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.304522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:14.993 [2024-11-21 03:29:02.304534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.690 ms 00:20:14.993 [2024-11-21 03:29:02.304539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.305808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.305829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:14.993 [2024-11-21 03:29:02.305840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.243 ms 00:20:14.993 [2024-11-21 03:29:02.305845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.306103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.306116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:14.993 [2024-11-21 03:29:02.306127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:20:14.993 [2024-11-21 03:29:02.306132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.321106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.321138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:14.993 [2024-11-21 03:29:02.321147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.957 ms 00:20:14.993 [2024-11-21 03:29:02.321153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.326878] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:14.993 [2024-11-21 03:29:02.338205] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.338318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:14.993 [2024-11-21 03:29:02.338331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.007 ms 00:20:14.993 [2024-11-21 03:29:02.338338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.338414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.338423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:14.993 [2024-11-21 03:29:02.338432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:14.993 [2024-11-21 03:29:02.338437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.338473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.338479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:14.993 [2024-11-21 03:29:02.338489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:14.993 [2024-11-21 03:29:02.338494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.338513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.338519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:14.993 [2024-11-21 03:29:02.338525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:14.993 [2024-11-21 03:29:02.338533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.338555] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:14.993 [2024-11-21 03:29:02.338562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.338568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:14.993 [2024-11-21 03:29:02.338574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:14.993 [2024-11-21 03:29:02.338579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.341649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.341743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:14.993 [2024-11-21 03:29:02.341755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.056 ms 00:20:14.993 [2024-11-21 03:29:02.341761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.341821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.341833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:14.993 [2024-11-21 03:29:02.341840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:14.993 [2024-11-21 03:29:02.341845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.342594] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:14.993 [2024-11-21 03:29:02.343403] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 96.375 
ms, result 0 00:20:14.993 [2024-11-21 03:29:02.344301] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:14.993 [2024-11-21 03:29:02.351764] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:14.993  [2024-11-21T03:29:02.558Z] Copying: 4096/4096 [kB] (average 39 MBps)[2024-11-21 03:29:02.454829] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:14.993 [2024-11-21 03:29:02.455364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.455393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:14.993 [2024-11-21 03:29:02.455401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:14.993 [2024-11-21 03:29:02.455407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.455423] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:14.993 [2024-11-21 03:29:02.455776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.455792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:14.993 [2024-11-21 03:29:02.455800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:20:14.993 [2024-11-21 03:29:02.455806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.457219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.457246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:14.993 [2024-11-21 03:29:02.457257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.399 ms 00:20:14.993 [2024-11-21 03:29:02.457262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.460164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.460266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:14.993 [2024-11-21 03:29:02.460277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.890 ms 00:20:14.993 [2024-11-21 03:29:02.460283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.465633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.465658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:14.993 [2024-11-21 03:29:02.465666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.328 ms 00:20:14.993 [2024-11-21 03:29:02.465676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.993 [2024-11-21 03:29:02.466793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.993 [2024-11-21 03:29:02.466821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:14.993 [2024-11-21 03:29:02.466834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:20:14.993 [2024-11-21 03:29:02.466840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.994 [2024-11-21 03:29:02.470067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.994 [2024-11-21 03:29:02.470100] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:14.994 [2024-11-21 03:29:02.470106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.203 ms 00:20:14.994 [2024-11-21 03:29:02.470112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.994 [2024-11-21 03:29:02.470210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.994 [2024-11-21 03:29:02.470217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:14.994 [2024-11-21 03:29:02.470223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:14.994 [2024-11-21 03:29:02.470231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.994 [2024-11-21 03:29:02.472058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.994 [2024-11-21 03:29:02.472085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:14.994 [2024-11-21 03:29:02.472093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.814 ms 00:20:14.994 [2024-11-21 03:29:02.472099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.994 [2024-11-21 03:29:02.473440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.994 [2024-11-21 03:29:02.473464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:14.994 [2024-11-21 03:29:02.473473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:20:14.994 [2024-11-21 03:29:02.473479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.994 [2024-11-21 03:29:02.474504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.994 [2024-11-21 03:29:02.474529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:14.994 [2024-11-21 03:29:02.474536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:20:14.994 [2024-11-21 03:29:02.474541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.994 [2024-11-21 03:29:02.475626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.994 [2024-11-21 03:29:02.475651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:14.994 [2024-11-21 03:29:02.475657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:20:14.994 [2024-11-21 03:29:02.475662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.994 [2024-11-21 03:29:02.475685] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:14.994 [2024-11-21 03:29:02.475700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 
03:29:02.475736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:20:14.994 [2024-11-21 03:29:02.475876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.475997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 
wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:14.994 [2024-11-21 03:29:02.476122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:14.995 [2024-11-21 03:29:02.476302] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:14.995 [2024-11-21 03:29:02.476312] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c1f45e4-d3ad-4cfe-a5aa-597dbbda17c1 00:20:14.995 [2024-11-21 03:29:02.476318] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:14.995 [2024-11-21 03:29:02.476323] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:14.995 [2024-11-21 03:29:02.476328] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:14.995 [2024-11-21 03:29:02.476334] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:14.995 [2024-11-21 03:29:02.476339] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:14.995 [2024-11-21 03:29:02.476345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:14.995 [2024-11-21 03:29:02.476353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:14.995 [2024-11-21 03:29:02.476358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:14.995 [2024-11-21 03:29:02.476363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:14.995 [2024-11-21 03:29:02.476368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.995 [2024-11-21 03:29:02.476373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:14.995 [2024-11-21 03:29:02.476380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:20:14.995 [2024-11-21 03:29:02.476388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.995 [2024-11-21 03:29:02.477371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.995 [2024-11-21 03:29:02.477387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:14.995 [2024-11-21 03:29:02.477393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:20:14.995 [2024-11-21 03:29:02.477402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.995 [2024-11-21 03:29:02.477467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.995 [2024-11-21 03:29:02.477473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:14.995 [2024-11-21 03:29:02.477479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:14.995 [2024-11-21 03:29:02.477485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.995 [2024-11-21 03:29:02.481846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.995 [2024-11-21 03:29:02.481869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:14.995 [2024-11-21 03:29:02.481876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.995 [2024-11-21 03:29:02.481886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.995 [2024-11-21 03:29:02.481952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.995 [2024-11-21 03:29:02.481959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:14.995 [2024-11-21 03:29:02.481965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.995 [2024-11-21 03:29:02.481970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.995 [2024-11-21 03:29:02.482008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.995 [2024-11-21 03:29:02.482015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:14.995 [2024-11-21 03:29:02.482020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.995 [2024-11-21 03:29:02.482025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.995 [2024-11-21 03:29:02.482040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.995 [2024-11-21 03:29:02.482045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:14.995 [2024-11-21 03:29:02.482050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
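The statistics dump above reports "WAF: inf" alongside "total writes: 960" and "user writes: 0". Write amplification factor is conventionally media writes divided by user (host) writes, so with zero user writes the ratio is undefined/infinite on a device that has only seen internal and metadata traffic. A minimal sketch of that arithmetic (the helper name is ours, not SPDK's):

```python
def write_amplification(total_writes: int, user_writes: int) -> float:
    """Conventional WAF: media writes per user (host) write."""
    if user_writes == 0:
        # No host I/O yet, only internal/metadata writes -> matches "WAF: inf".
        return float("inf")
    return total_writes / user_writes

# Counters from the ftl_debug.c stats dump above:
print(write_amplification(960, 0))  # -> inf
```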
00:20:14.995 [2024-11-21 03:29:02.482055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.995 [2024-11-21 03:29:02.489435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.995 [2024-11-21 03:29:02.489466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:14.995 [2024-11-21 03:29:02.489474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.995 [2024-11-21 03:29:02.489484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.995 [2024-11-21 03:29:02.495430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.995 [2024-11-21 03:29:02.495577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:14.995 [2024-11-21 03:29:02.495590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.995 [2024-11-21 03:29:02.495597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.995 [2024-11-21 03:29:02.495632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.995 [2024-11-21 03:29:02.495638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:14.995 [2024-11-21 03:29:02.495650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.995 [2024-11-21 03:29:02.495659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.995 [2024-11-21 03:29:02.495681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.995 [2024-11-21 03:29:02.495690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:14.995 [2024-11-21 03:29:02.495696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.995 [2024-11-21 03:29:02.495702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.995 [2024-11-21 03:29:02.495753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.995 [2024-11-21 03:29:02.495761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:14.995 [2024-11-21 03:29:02.495770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.995 [2024-11-21 03:29:02.495775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.995 [2024-11-21 03:29:02.495801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.995 [2024-11-21 03:29:02.495812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:14.995 [2024-11-21 03:29:02.495818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.995 [2024-11-21 03:29:02.495824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.995 [2024-11-21 03:29:02.495852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.995 [2024-11-21 03:29:02.495858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:14.995 [2024-11-21 03:29:02.495864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.995 [2024-11-21 03:29:02.495870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.995 [2024-11-21 03:29:02.495918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.995 [2024-11-21 03:29:02.495929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:14.995 [2024-11-21 03:29:02.495935] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.995 [2024-11-21 03:29:02.495941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.995 [2024-11-21 03:29:02.496043] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 40.662 ms, result 0 00:20:15.256 00:20:15.256 00:20:15.256 03:29:02 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=89586 00:20:15.256 03:29:02 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 89586 00:20:15.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.256 03:29:02 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89586 ']' 00:20:15.256 03:29:02 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.256 03:29:02 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.256 03:29:02 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.256 03:29:02 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.256 03:29:02 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:15.256 03:29:02 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:15.256 [2024-11-21 03:29:02.722850] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:20:15.256 [2024-11-21 03:29:02.722978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89586 ] 00:20:15.516 [2024-11-21 03:29:02.853191] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
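The `waitforlisten 89586` step above blocks until the freshly launched spdk_tgt answers on the UNIX-domain RPC socket named in the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message. A simplified sketch of that readiness poll, assuming the /var/tmp/spdk.sock path from the log and the standard rpc_get_methods RPC (the function name and timeout are ours, not the shell helper's):

```python
import json
import socket
import time

SOCK_PATH = "/var/tmp/spdk.sock"  # socket named in the log above

def wait_for_rpc(path: str = SOCK_PATH, timeout: float = 10.0) -> None:
    """Poll the SPDK JSON-RPC socket until the target responds
    (a simplified stand-in for the waitforlisten shell helper)."""
    request = json.dumps({"jsonrpc": "2.0", "id": 1,
                          "method": "rpc_get_methods"}).encode()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.settimeout(1.0)
                s.connect(path)
                s.sendall(request)
                if s.recv(4096):  # any reply means the RPC server is serving
                    return
        except OSError:
            time.sleep(0.1)  # target not listening yet; retry
    raise TimeoutError(f"no RPC listener on {path} after {timeout}s")
```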
00:20:15.516 [2024-11-21 03:29:02.873095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.516 [2024-11-21 03:29:02.889234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.088 03:29:03 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.088 03:29:03 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:16.088 03:29:03 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:16.351 [2024-11-21 03:29:03.715846] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.351 [2024-11-21 03:29:03.715891] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.351 [2024-11-21 03:29:03.874978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.351 [2024-11-21 03:29:03.875131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:16.351 [2024-11-21 03:29:03.875153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:16.351 [2024-11-21 03:29:03.875161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.351 [2024-11-21 03:29:03.877413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.351 [2024-11-21 03:29:03.877446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:16.351 [2024-11-21 03:29:03.877457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.229 ms 00:20:16.351 [2024-11-21 03:29:03.877465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.351 [2024-11-21 03:29:03.877534] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:16.351 [2024-11-21 03:29:03.877749] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:16.351 [2024-11-21 03:29:03.877764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.351 [2024-11-21 03:29:03.877777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:16.351 [2024-11-21 03:29:03.877787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:20:16.351 [2024-11-21 03:29:03.877794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.351 [2024-11-21 03:29:03.879197] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:16.351 [2024-11-21 03:29:03.881631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.351 [2024-11-21 03:29:03.881764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:16.351 [2024-11-21 03:29:03.881781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.441 ms 00:20:16.351 [2024-11-21 03:29:03.881791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.351 [2024-11-21 03:29:03.881855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.351 [2024-11-21 03:29:03.881869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:16.351 [2024-11-21 03:29:03.881877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:16.351 [2024-11-21 03:29:03.881886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.351 [2024-11-21 03:29:03.886743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.351 [2024-11-21 
03:29:03.886776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:16.351 [2024-11-21 03:29:03.886786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.795 ms 00:20:16.351 [2024-11-21 03:29:03.886796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.351 [2024-11-21 03:29:03.886878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.351 [2024-11-21 03:29:03.886889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:16.351 [2024-11-21 03:29:03.886914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:16.351 [2024-11-21 03:29:03.886927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.351 [2024-11-21 03:29:03.886957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.351 [2024-11-21 03:29:03.886967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:16.351 [2024-11-21 03:29:03.886977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:16.351 [2024-11-21 03:29:03.886985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.352 [2024-11-21 03:29:03.887009] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:16.352 [2024-11-21 03:29:03.888335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.352 [2024-11-21 03:29:03.888450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:16.352 [2024-11-21 03:29:03.888470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.327 ms 00:20:16.352 [2024-11-21 03:29:03.888478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.352 [2024-11-21 03:29:03.888514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.352 [2024-11-21 03:29:03.888525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:16.352 [2024-11-21 03:29:03.888536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:16.352 [2024-11-21 03:29:03.888547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.352 [2024-11-21 03:29:03.888568] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:16.352 [2024-11-21 03:29:03.888587] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:16.352 [2024-11-21 03:29:03.888636] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:16.352 [2024-11-21 03:29:03.888655] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:16.352 [2024-11-21 03:29:03.888758] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:16.352 [2024-11-21 03:29:03.888768] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:16.352 [2024-11-21 03:29:03.888780] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:16.352 [2024-11-21 03:29:03.888790] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:16.352 [2024-11-21 03:29:03.888801] ftl_layout.c: 687:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:16.352 [2024-11-21 03:29:03.888808] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:16.352 [2024-11-21 03:29:03.888817] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:16.352 [2024-11-21 03:29:03.888824] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:16.352 [2024-11-21 03:29:03.888837] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:16.352 [2024-11-21 03:29:03.888844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.352 [2024-11-21 03:29:03.888852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:16.352 [2024-11-21 03:29:03.888860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:20:16.352 [2024-11-21 03:29:03.888868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.352 [2024-11-21 03:29:03.888970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.352 [2024-11-21 03:29:03.888981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:16.352 [2024-11-21 03:29:03.888988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:16.352 [2024-11-21 03:29:03.888996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.352 [2024-11-21 03:29:03.889098] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:16.352 [2024-11-21 03:29:03.889109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:16.352 [2024-11-21 03:29:03.889117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:16.352 [2024-11-21 03:29:03.889127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.352 [2024-11-21 03:29:03.889134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:16.352 [2024-11-21 03:29:03.889142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:16.352 [2024-11-21 03:29:03.889149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:16.352 [2024-11-21 03:29:03.889157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:16.352 [2024-11-21 03:29:03.889164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:16.352 [2024-11-21 03:29:03.889172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:16.352 [2024-11-21 03:29:03.889183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:16.352 [2024-11-21 03:29:03.889191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:16.352 [2024-11-21 03:29:03.889197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:16.352 [2024-11-21 03:29:03.889205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:16.352 [2024-11-21 03:29:03.889212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:16.352 [2024-11-21 03:29:03.889219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.352 [2024-11-21 03:29:03.889226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:16.352 [2024-11-21 03:29:03.889234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:16.352 [2024-11-21 03:29:03.889240] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:20:16.352 [2024-11-21 03:29:03.889250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:16.352 [2024-11-21 03:29:03.889256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:16.352 [2024-11-21 03:29:03.889265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.352 [2024-11-21 03:29:03.889271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:16.352 [2024-11-21 03:29:03.889280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:16.352 [2024-11-21 03:29:03.889286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.352 [2024-11-21 03:29:03.889294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:16.352 [2024-11-21 03:29:03.889301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:16.352 [2024-11-21 03:29:03.889309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.352 [2024-11-21 03:29:03.889315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:16.352 [2024-11-21 03:29:03.889323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:16.352 [2024-11-21 03:29:03.889330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.352 [2024-11-21 03:29:03.889338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:16.352 [2024-11-21 03:29:03.889345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:16.352 [2024-11-21 03:29:03.889353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:16.352 [2024-11-21 03:29:03.889360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:16.352 [2024-11-21 03:29:03.889369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:16.352 [2024-11-21 03:29:03.889375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:16.352 [2024-11-21 03:29:03.889383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:16.352 [2024-11-21 03:29:03.889390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:16.352 [2024-11-21 03:29:03.889397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.352 [2024-11-21 03:29:03.889404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:16.352 [2024-11-21 03:29:03.889412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:16.352 [2024-11-21 03:29:03.889419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.352 [2024-11-21 03:29:03.889426] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:16.352 [2024-11-21 03:29:03.889434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:16.352 [2024-11-21 03:29:03.889445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:16.352 [2024-11-21 03:29:03.889452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.352 [2024-11-21 03:29:03.889461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:16.352 [2024-11-21 03:29:03.889467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:16.352 [2024-11-21 03:29:03.889476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:16.352 [2024-11-21 03:29:03.889483] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:16.352 [2024-11-21 03:29:03.889493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:16.352 [2024-11-21 03:29:03.889500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:16.352 [2024-11-21 03:29:03.889509] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:16.352 [2024-11-21 03:29:03.889518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:16.352 [2024-11-21 03:29:03.889528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:16.352 [2024-11-21 03:29:03.889535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:16.352 [2024-11-21 03:29:03.889544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:16.352 [2024-11-21 03:29:03.889551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:16.352 [2024-11-21 03:29:03.889562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:16.352 [2024-11-21 03:29:03.889569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:16.352 [2024-11-21 03:29:03.889577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:16.352 [2024-11-21 03:29:03.889585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:16.352 [2024-11-21 03:29:03.889594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:16.352 [2024-11-21 03:29:03.889601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:16.352 [2024-11-21 03:29:03.889609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:16.352 [2024-11-21 03:29:03.889616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:16.352 [2024-11-21 03:29:03.889626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:16.352 [2024-11-21 03:29:03.889634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:16.353 [2024-11-21 03:29:03.889642] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:16.353 [2024-11-21 03:29:03.889652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:16.353 [2024-11-21 03:29:03.889661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:20:16.353 [2024-11-21 03:29:03.889668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:16.353 [2024-11-21 03:29:03.889678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:16.353 [2024-11-21 03:29:03.889685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:16.353 [2024-11-21 03:29:03.889695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.353 [2024-11-21 03:29:03.889702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:16.353 [2024-11-21 03:29:03.889710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:20:16.353 [2024-11-21 03:29:03.889717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.353 [2024-11-21 03:29:03.898589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.353 [2024-11-21 03:29:03.898623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:16.353 [2024-11-21 03:29:03.898633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.814 ms 00:20:16.353 [2024-11-21 03:29:03.898642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.353 [2024-11-21 03:29:03.898762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.353 [2024-11-21 03:29:03.898772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:16.353 [2024-11-21 03:29:03.898782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:16.353 [2024-11-21 03:29:03.898789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.353 [2024-11-21 03:29:03.907447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.353 [2024-11-21 03:29:03.907479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:16.353 [2024-11-21 03:29:03.907492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.638 ms 00:20:16.353 [2024-11-21 03:29:03.907499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.353 [2024-11-21 03:29:03.907540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.353 [2024-11-21 03:29:03.907548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:16.353 [2024-11-21 03:29:03.907564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:16.353 [2024-11-21 03:29:03.907570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.353 [2024-11-21 03:29:03.907890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.353 [2024-11-21 03:29:03.907926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:16.353 [2024-11-21 03:29:03.907942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:20:16.353 [2024-11-21 03:29:03.907950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.353 [2024-11-21 03:29:03.908086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.353 [2024-11-21 03:29:03.908099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:16.353 [2024-11-21 03:29:03.908109] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:20:16.353 [2024-11-21 03:29:03.908117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.614 [2024-11-21 03:29:03.913520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.614 [2024-11-21 03:29:03.913549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:16.614 [2024-11-21 03:29:03.913560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.380 ms 00:20:16.614 [2024-11-21 03:29:03.913570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.614 [2024-11-21 03:29:03.916184] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:16.614 [2024-11-21 03:29:03.916219] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:16.614 [2024-11-21 03:29:03.916232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.614 [2024-11-21 03:29:03.916240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:16.614 [2024-11-21 03:29:03.916250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.557 ms 00:20:16.614 [2024-11-21 03:29:03.916257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.614 [2024-11-21 03:29:03.931226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.614 [2024-11-21 03:29:03.931269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:16.614 [2024-11-21 03:29:03.931285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.924 ms 00:20:16.614 [2024-11-21 03:29:03.931292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.614 [2024-11-21 03:29:03.933349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.614 [2024-11-21 03:29:03.933382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:16.614 [2024-11-21 03:29:03.933393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.977 ms 00:20:16.614 [2024-11-21 03:29:03.933400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.614 [2024-11-21 03:29:03.935186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.614 [2024-11-21 03:29:03.935216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:16.614 [2024-11-21 03:29:03.935227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.745 ms 00:20:16.614 [2024-11-21 03:29:03.935233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.614 [2024-11-21 03:29:03.935544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.614 [2024-11-21 03:29:03.935554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:16.614 [2024-11-21 03:29:03.935564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:20:16.614 [2024-11-21 03:29:03.935572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.614 [2024-11-21 03:29:03.962274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.614 [2024-11-21 03:29:03.962321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:16.614 [2024-11-21 03:29:03.962339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.664 ms 
00:20:16.614 [2024-11-21 03:29:03.962347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.614 [2024-11-21 03:29:03.969823] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:16.614 [2024-11-21 03:29:03.984056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.614 [2024-11-21 03:29:03.984217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:16.614 [2024-11-21 03:29:03.984234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.633 ms 00:20:16.614 [2024-11-21 03:29:03.984247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.614 [2024-11-21 03:29:03.984342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.614 [2024-11-21 03:29:03.984354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:16.614 [2024-11-21 03:29:03.984363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:16.614 [2024-11-21 03:29:03.984373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.615 [2024-11-21 03:29:03.984424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.615 [2024-11-21 03:29:03.984434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:16.615 [2024-11-21 03:29:03.984442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:16.615 [2024-11-21 03:29:03.984451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.615 [2024-11-21 03:29:03.984474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.615 [2024-11-21 03:29:03.984487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:16.615 [2024-11-21 03:29:03.984495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:16.615 [2024-11-21 03:29:03.984504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.615 [2024-11-21 03:29:03.984533] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:16.615 [2024-11-21 03:29:03.984544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.615 [2024-11-21 03:29:03.984552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:16.615 [2024-11-21 03:29:03.984561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:16.615 [2024-11-21 03:29:03.984568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.615 [2024-11-21 03:29:03.988487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.615 [2024-11-21 03:29:03.988521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:16.615 [2024-11-21 03:29:03.988535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.893 ms 00:20:16.615 [2024-11-21 03:29:03.988542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.615 [2024-11-21 03:29:03.988628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.615 [2024-11-21 03:29:03.988638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:16.615 [2024-11-21 03:29:03.988648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:16.615 [2024-11-21 03:29:03.988655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.615 [2024-11-21 
03:29:03.989434] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:16.615 [2024-11-21 03:29:03.990442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 114.202 ms, result 0 00:20:16.615 [2024-11-21 03:29:03.992297] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:16.615 Some configs were skipped because the RPC state that can call them passed over. 00:20:16.615 03:29:04 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:16.876 [2024-11-21 03:29:04.220144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.876 [2024-11-21 03:29:04.220289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:16.876 [2024-11-21 03:29:04.220347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.774 ms 00:20:16.876 [2024-11-21 03:29:04.220374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.876 [2024-11-21 03:29:04.220424] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.055 ms, result 0 00:20:16.876 true 00:20:16.876 03:29:04 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:16.876 [2024-11-21 03:29:04.432239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.876 [2024-11-21 03:29:04.432395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:16.876 [2024-11-21 03:29:04.432460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.590 ms 00:20:16.876 [2024-11-21 03:29:04.432483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.876 [2024-11-21 03:29:04.432543] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.896 ms, result 0 00:20:16.876 true 00:20:17.138 03:29:04 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 89586 00:20:17.138 03:29:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89586 ']' 00:20:17.138 03:29:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89586 00:20:17.138 03:29:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:17.138 03:29:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.138 03:29:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89586 00:20:17.138 killing process with pid 89586 00:20:17.138 03:29:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:17.138 03:29:04 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:17.138 03:29:04 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89586' 00:20:17.138 03:29:04 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89586 00:20:17.138 03:29:04 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89586 00:20:17.138 [2024-11-21 03:29:04.607296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.138 [2024-11-21 03:29:04.607366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:17.138 [2024-11-21 03:29:04.607381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:17.138 [2024-11-21 
03:29:04.607392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.138 [2024-11-21 03:29:04.607416] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:17.138 [2024-11-21 03:29:04.608080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.138 [2024-11-21 03:29:04.608102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:17.138 [2024-11-21 03:29:04.608114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:20:17.138 [2024-11-21 03:29:04.608130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.138 [2024-11-21 03:29:04.608416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.138 [2024-11-21 03:29:04.608436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:17.138 [2024-11-21 03:29:04.608448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:20:17.138 [2024-11-21 03:29:04.608456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.138 [2024-11-21 03:29:04.613073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.138 [2024-11-21 03:29:04.613112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:17.138 [2024-11-21 03:29:04.613127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.593 ms 00:20:17.138 [2024-11-21 03:29:04.613135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.138 [2024-11-21 03:29:04.620296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.138 [2024-11-21 03:29:04.620335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:17.138 [2024-11-21 03:29:04.620349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.108 ms 00:20:17.138 [2024-11-21 03:29:04.620357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.139 [2024-11-21 03:29:04.623480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.139 [2024-11-21 03:29:04.623526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:17.139 [2024-11-21 03:29:04.623539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.056 ms 00:20:17.139 [2024-11-21 03:29:04.623546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.139 [2024-11-21 03:29:04.629394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.139 [2024-11-21 03:29:04.629443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:17.139 [2024-11-21 03:29:04.629459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.792 ms 00:20:17.139 [2024-11-21 03:29:04.629468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.139 [2024-11-21 03:29:04.629610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.139 [2024-11-21 03:29:04.629622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:17.139 [2024-11-21 03:29:04.629633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:17.139 [2024-11-21 03:29:04.629641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.139 [2024-11-21 03:29:04.633224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.139 [2024-11-21 03:29:04.633392] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:17.139 [2024-11-21 03:29:04.633418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.557 ms 00:20:17.139 [2024-11-21 03:29:04.633425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.139 [2024-11-21 03:29:04.635995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.139 [2024-11-21 03:29:04.636037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:17.139 [2024-11-21 03:29:04.636049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.521 ms 00:20:17.139 [2024-11-21 03:29:04.636056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.139 [2024-11-21 03:29:04.638196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.139 [2024-11-21 03:29:04.638241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:17.139 [2024-11-21 03:29:04.638253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.087 ms 00:20:17.139 [2024-11-21 03:29:04.638260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.139 [2024-11-21 03:29:04.640436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.139 [2024-11-21 03:29:04.640480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:17.139 [2024-11-21 03:29:04.640492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.096 ms 00:20:17.139 [2024-11-21 03:29:04.640499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.139 [2024-11-21 03:29:04.640543] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:17.139 [2024-11-21 03:29:04.640563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640671] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640889] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.640999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.641010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.641018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.641027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.641035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.641044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.641051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.641062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.641069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.641078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.641085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.641095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.641102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.641113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 
03:29:04.641121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.641131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.641139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:17.139 [2024-11-21 03:29:04.641152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:20:17.140 [2024-11-21 03:29:04.641352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:17.140 [2024-11-21 03:29:04.641473] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:17.140 [2024-11-21 03:29:04.641483] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c1f45e4-d3ad-4cfe-a5aa-597dbbda17c1 00:20:17.140 [2024-11-21 03:29:04.641494] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:17.140 [2024-11-21 03:29:04.641503] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:17.140 [2024-11-21 03:29:04.641511] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:17.140 [2024-11-21 03:29:04.641521] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:17.140 [2024-11-21 03:29:04.641531] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:17.140 [2024-11-21 03:29:04.641541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:17.140 [2024-11-21 03:29:04.641549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:17.140 [2024-11-21 03:29:04.641557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:17.140 [2024-11-21 03:29:04.641563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:17.140 [2024-11-21 03:29:04.641573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.140 [2024-11-21 03:29:04.641581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:17.140 [2024-11-21 03:29:04.641598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:20:17.140 [2024-11-21 03:29:04.641605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:17.140 [2024-11-21 03:29:04.643945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.140 [2024-11-21 03:29:04.643974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:17.140 [2024-11-21 03:29:04.643986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.300 ms 00:20:17.140 [2024-11-21 03:29:04.643994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.140 [2024-11-21 03:29:04.644134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.140 [2024-11-21 03:29:04.644144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:17.140 [2024-11-21 03:29:04.644155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:17.140 [2024-11-21 03:29:04.644166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.140 [2024-11-21 03:29:04.651872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.140 [2024-11-21 03:29:04.651963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.140 [2024-11-21 03:29:04.651977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.140 [2024-11-21 03:29:04.651985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.140 [2024-11-21 03:29:04.652064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.140 [2024-11-21 03:29:04.652073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.140 [2024-11-21 03:29:04.652086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.140 [2024-11-21 03:29:04.652097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.140 [2024-11-21 03:29:04.652145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.140 [2024-11-21 03:29:04.652155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.140 [2024-11-21 03:29:04.652165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.140 [2024-11-21 03:29:04.652172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.140 [2024-11-21 03:29:04.652192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.140 [2024-11-21 03:29:04.652201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.140 [2024-11-21 03:29:04.652211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.140 [2024-11-21 03:29:04.652219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.140 [2024-11-21 03:29:04.666538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.140 [2024-11-21 03:29:04.666753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.140 [2024-11-21 03:29:04.666778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.140 [2024-11-21 03:29:04.666787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.140 [2024-11-21 03:29:04.678314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.140 [2024-11-21 03:29:04.678368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.140 [2024-11-21 03:29:04.678385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.140 
[2024-11-21 03:29:04.678393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.140 [2024-11-21 03:29:04.678467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.140 [2024-11-21 03:29:04.678478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.140 [2024-11-21 03:29:04.678489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.140 [2024-11-21 03:29:04.678497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.140 [2024-11-21 03:29:04.678534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.140 [2024-11-21 03:29:04.678542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.140 [2024-11-21 03:29:04.678553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.140 [2024-11-21 03:29:04.678562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.140 [2024-11-21 03:29:04.678646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.140 [2024-11-21 03:29:04.678656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.140 [2024-11-21 03:29:04.678666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.140 [2024-11-21 03:29:04.678675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.140 [2024-11-21 03:29:04.678717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.140 [2024-11-21 03:29:04.678731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:17.140 [2024-11-21 03:29:04.678743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.140 [2024-11-21 03:29:04.678751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.140 [2024-11-21 03:29:04.678795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.140 [2024-11-21 03:29:04.678806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.140 [2024-11-21 03:29:04.678816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.140 [2024-11-21 03:29:04.678825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.140 [2024-11-21 03:29:04.678874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.140 [2024-11-21 03:29:04.678885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.140 [2024-11-21 03:29:04.678926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.141 [2024-11-21 03:29:04.678936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.141 [2024-11-21 03:29:04.679089] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.764 ms, result 0 00:20:17.402 03:29:04 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:17.662 [2024-11-21 03:29:04.978962] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
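For reference, the trim coverage exercised above reduces to three commands, each visible verbatim in this log. A minimal sketch of the same sequence, assuming a running SPDK target with the ftl0 bdev already created and this job's repo paths:

  # trim.sh@99/@100: unmap 1024 blocks at both ends of the L2P range
  # (23591936 = 23592960 L2P entries - 1024, per the layout dump)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  # trim.sh@105: spdk_dd runs as its own SPDK app (hence the 'FTL shutdown' above and the
  # fresh DPDK/FTL startup below), loading the saved bdev config and reading 65536 blocks
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json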
00:20:17.662 [2024-11-21 03:29:04.979106] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89622 ] 00:20:17.662 [2024-11-21 03:29:05.113636] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:17.662 [2024-11-21 03:29:05.145035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.662 [2024-11-21 03:29:05.175350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.925 [2024-11-21 03:29:05.291547] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.925 [2024-11-21 03:29:05.291629] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.925 [2024-11-21 03:29:05.453270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.925 [2024-11-21 03:29:05.453495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:17.925 [2024-11-21 03:29:05.453519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:17.925 [2024-11-21 03:29:05.453529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.925 [2024-11-21 03:29:05.456065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.925 [2024-11-21 03:29:05.456117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.925 [2024-11-21 03:29:05.456128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.510 ms 00:20:17.925 [2024-11-21 03:29:05.456136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.925 [2024-11-21 03:29:05.456241] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:17.925 [2024-11-21 03:29:05.456503] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:17.925 [2024-11-21 03:29:05.456522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.925 [2024-11-21 03:29:05.456531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.925 [2024-11-21 03:29:05.456541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:20:17.925 [2024-11-21 03:29:05.456550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.925 [2024-11-21 03:29:05.458564] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:17.925 [2024-11-21 03:29:05.462434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.925 [2024-11-21 03:29:05.462606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:17.925 [2024-11-21 03:29:05.463045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.872 ms 00:20:17.925 [2024-11-21 03:29:05.463098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.925 [2024-11-21 03:29:05.463481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.925 [2024-11-21 03:29:05.463670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:17.925 [2024-11-21 03:29:05.463691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:17.925 [2024-11-21 
03:29:05.463699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.925 [2024-11-21 03:29:05.471678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.925 [2024-11-21 03:29:05.471724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.925 [2024-11-21 03:29:05.471735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.917 ms 00:20:17.925 [2024-11-21 03:29:05.471746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.925 [2024-11-21 03:29:05.471890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.925 [2024-11-21 03:29:05.471931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.925 [2024-11-21 03:29:05.471941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:20:17.925 [2024-11-21 03:29:05.471950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.925 [2024-11-21 03:29:05.471984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.925 [2024-11-21 03:29:05.472003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:17.925 [2024-11-21 03:29:05.472012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:17.925 [2024-11-21 03:29:05.472026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.925 [2024-11-21 03:29:05.472051] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:17.925 [2024-11-21 03:29:05.474124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.925 [2024-11-21 03:29:05.474165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.925 [2024-11-21 03:29:05.474177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.080 ms 00:20:17.925 [2024-11-21 03:29:05.474191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.925 [2024-11-21 03:29:05.474233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.925 [2024-11-21 03:29:05.474242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:17.925 [2024-11-21 03:29:05.474250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:17.925 [2024-11-21 03:29:05.474258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.925 [2024-11-21 03:29:05.474280] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:17.925 [2024-11-21 03:29:05.474300] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:17.925 [2024-11-21 03:29:05.474336] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:17.925 [2024-11-21 03:29:05.474359] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:17.925 [2024-11-21 03:29:05.474464] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:17.925 [2024-11-21 03:29:05.474476] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:17.925 [2024-11-21 03:29:05.474487] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
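The MiB figures in these layout dumps line up with the hex region table if each FTL block is 4 KiB; that block size is an inference from the numbers here, not something the log states. A quick shell check against two of the regions printed in this run:

  # l2p region: blk_sz 0x5a00 blocks * 4096 B = 90 MiB ("blocks: 90.00 MiB"),
  # which also equals 23592960 L2P entries * 4 B address size
  echo $(( 0x5a00 * 4096 / 1024 / 1024 ))      # 90
  # base-dev data region (type 0x9): 0x1900000 blocks -> 102400 MiB ("blocks: 102400.00 MiB")
  echo $(( 0x1900000 * 4096 / 1024 / 1024 ))   # 102400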
00:20:17.925 [2024-11-21 03:29:05.474498] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:17.925 [2024-11-21 03:29:05.474507] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:17.925 [2024-11-21 03:29:05.474516] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:17.925 [2024-11-21 03:29:05.474523] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:17.925 [2024-11-21 03:29:05.474537] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:17.925 [2024-11-21 03:29:05.474546] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:17.925 [2024-11-21 03:29:05.474556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.925 [2024-11-21 03:29:05.474563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:17.925 [2024-11-21 03:29:05.474572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:20:17.925 [2024-11-21 03:29:05.474579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.925 [2024-11-21 03:29:05.474666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.925 [2024-11-21 03:29:05.474675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:17.925 [2024-11-21 03:29:05.474683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:17.925 [2024-11-21 03:29:05.474694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.925 [2024-11-21 03:29:05.474797] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:17.925 [2024-11-21 03:29:05.474812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:17.925 [2024-11-21 03:29:05.474821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.925 [2024-11-21 03:29:05.474829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.925 [2024-11-21 03:29:05.474837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:17.925 [2024-11-21 03:29:05.474844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:17.925 [2024-11-21 03:29:05.474862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:17.925 [2024-11-21 03:29:05.474871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:17.925 [2024-11-21 03:29:05.474878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:17.925 [2024-11-21 03:29:05.474886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.925 [2024-11-21 03:29:05.474893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:17.925 [2024-11-21 03:29:05.474924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:17.925 [2024-11-21 03:29:05.474931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.925 [2024-11-21 03:29:05.474939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:17.925 [2024-11-21 03:29:05.474946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:17.925 [2024-11-21 03:29:05.474953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.925 [2024-11-21 03:29:05.474961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:17.925 [2024-11-21 03:29:05.474968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:17.925 [2024-11-21 03:29:05.474974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.925 [2024-11-21 03:29:05.474982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:17.925 [2024-11-21 03:29:05.474989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:17.925 [2024-11-21 03:29:05.474996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.925 [2024-11-21 03:29:05.475010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:17.925 [2024-11-21 03:29:05.475018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:17.925 [2024-11-21 03:29:05.475025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.925 [2024-11-21 03:29:05.475032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:17.925 [2024-11-21 03:29:05.475040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:17.925 [2024-11-21 03:29:05.475048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.925 [2024-11-21 03:29:05.475055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:17.925 [2024-11-21 03:29:05.475062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:17.925 [2024-11-21 03:29:05.475069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.925 [2024-11-21 03:29:05.475076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:17.925 [2024-11-21 03:29:05.475083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:17.925 [2024-11-21 03:29:05.475089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.925 [2024-11-21 03:29:05.475095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:17.925 [2024-11-21 03:29:05.475102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:17.925 [2024-11-21 03:29:05.475109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.925 [2024-11-21 03:29:05.475116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:17.925 [2024-11-21 03:29:05.475126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:17.925 [2024-11-21 03:29:05.475132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.925 [2024-11-21 03:29:05.475139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:17.925 [2024-11-21 03:29:05.475145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:17.925 [2024-11-21 03:29:05.475153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.925 [2024-11-21 03:29:05.475160] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:17.925 [2024-11-21 03:29:05.475168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:17.925 [2024-11-21 03:29:05.475176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.925 [2024-11-21 03:29:05.475184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.925 [2024-11-21 03:29:05.475192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:17.925 [2024-11-21 03:29:05.475199] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:17.925 [2024-11-21 03:29:05.475206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:17.925 [2024-11-21 03:29:05.475213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:17.925 [2024-11-21 03:29:05.475219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:17.925 [2024-11-21 03:29:05.475227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:17.925 [2024-11-21 03:29:05.475236] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:17.925 [2024-11-21 03:29:05.475256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.925 [2024-11-21 03:29:05.475265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:17.925 [2024-11-21 03:29:05.475273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:17.925 [2024-11-21 03:29:05.475280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:17.925 [2024-11-21 03:29:05.475287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:17.925 [2024-11-21 03:29:05.475295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:17.925 [2024-11-21 03:29:05.475302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:17.925 [2024-11-21 03:29:05.475309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:17.925 [2024-11-21 03:29:05.475317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:17.925 [2024-11-21 03:29:05.475324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:17.925 [2024-11-21 03:29:05.475332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:17.925 [2024-11-21 03:29:05.475339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:17.925 [2024-11-21 03:29:05.475346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:17.925 [2024-11-21 03:29:05.475353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:17.925 [2024-11-21 03:29:05.475361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:17.925 [2024-11-21 03:29:05.475369] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:17.925 [2024-11-21 03:29:05.475382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.925 [2024-11-21 03:29:05.475390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:17.926 [2024-11-21 03:29:05.475398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:17.926 [2024-11-21 03:29:05.475406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:17.926 [2024-11-21 03:29:05.475413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:17.926 [2024-11-21 03:29:05.475421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.926 [2024-11-21 03:29:05.475430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:17.926 [2024-11-21 03:29:05.475437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:20:17.926 [2024-11-21 03:29:05.475445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.187 [2024-11-21 03:29:05.489158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.187 [2024-11-21 03:29:05.489203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:18.187 [2024-11-21 03:29:05.489215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.660 ms 00:20:18.187 [2024-11-21 03:29:05.489224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.187 [2024-11-21 03:29:05.489354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.187 [2024-11-21 03:29:05.489375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:18.187 [2024-11-21 03:29:05.489384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:18.187 [2024-11-21 03:29:05.489391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.187 [2024-11-21 03:29:05.516457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.187 [2024-11-21 03:29:05.516561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:18.187 [2024-11-21 03:29:05.516597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.037 ms 00:20:18.187 [2024-11-21 03:29:05.516629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.187 [2024-11-21 03:29:05.516827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.187 [2024-11-21 03:29:05.516862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:18.187 [2024-11-21 03:29:05.516951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:18.187 [2024-11-21 03:29:05.516975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.187 [2024-11-21 03:29:05.517661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.187 [2024-11-21 03:29:05.517732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:18.187 [2024-11-21 03:29:05.517771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:20:18.187 [2024-11-21 03:29:05.517791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.187 [2024-11-21 03:29:05.518204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:18.187 [2024-11-21 03:29:05.518248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:18.187 [2024-11-21 03:29:05.518271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:20:18.187 [2024-11-21 03:29:05.518290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.187 [2024-11-21 03:29:05.526574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.187 [2024-11-21 03:29:05.526627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:18.187 [2024-11-21 03:29:05.526637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.231 ms 00:20:18.187 [2024-11-21 03:29:05.526645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.187 [2024-11-21 03:29:05.530685] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:18.187 [2024-11-21 03:29:05.530736] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:18.187 [2024-11-21 03:29:05.530748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.187 [2024-11-21 03:29:05.530756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:18.187 [2024-11-21 03:29:05.530765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.983 ms 00:20:18.187 [2024-11-21 03:29:05.530772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.187 [2024-11-21 03:29:05.546696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.187 [2024-11-21 03:29:05.546742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:18.187 [2024-11-21 03:29:05.546754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.850 ms 00:20:18.187 [2024-11-21 03:29:05.546762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.187 [2024-11-21 03:29:05.549827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.187 [2024-11-21 03:29:05.550026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:18.187 [2024-11-21 03:29:05.550045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.947 ms 00:20:18.187 [2024-11-21 03:29:05.550053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.187 [2024-11-21 03:29:05.552608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.187 [2024-11-21 03:29:05.552653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:18.187 [2024-11-21 03:29:05.552663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.434 ms 00:20:18.187 [2024-11-21 03:29:05.552671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.187 [2024-11-21 03:29:05.553027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.187 [2024-11-21 03:29:05.553041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:18.187 [2024-11-21 03:29:05.553050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:20:18.187 [2024-11-21 03:29:05.553063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.187 [2024-11-21 03:29:05.578636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.187 [2024-11-21 03:29:05.578692] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:18.187 [2024-11-21 03:29:05.578705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.548 ms 00:20:18.187 [2024-11-21 03:29:05.578718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.187 [2024-11-21 03:29:05.586895] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:18.187 [2024-11-21 03:29:05.605627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.188 [2024-11-21 03:29:05.605678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:18.188 [2024-11-21 03:29:05.605691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.822 ms 00:20:18.188 [2024-11-21 03:29:05.605699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.188 [2024-11-21 03:29:05.605785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.188 [2024-11-21 03:29:05.605797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:18.188 [2024-11-21 03:29:05.605816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:18.188 [2024-11-21 03:29:05.605825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.188 [2024-11-21 03:29:05.605886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.188 [2024-11-21 03:29:05.605934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:18.188 [2024-11-21 03:29:05.605950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:18.188 [2024-11-21 03:29:05.605958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.188 [2024-11-21 03:29:05.605985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.188 [2024-11-21 03:29:05.606007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:18.188 [2024-11-21 03:29:05.606016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:18.188 [2024-11-21 03:29:05.606027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.188 [2024-11-21 03:29:05.606061] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:18.188 [2024-11-21 03:29:05.606072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.188 [2024-11-21 03:29:05.606080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:18.188 [2024-11-21 03:29:05.606090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:18.188 [2024-11-21 03:29:05.606097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.188 [2024-11-21 03:29:05.611972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.188 [2024-11-21 03:29:05.612146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:18.188 [2024-11-21 03:29:05.612165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.850 ms 00:20:18.188 [2024-11-21 03:29:05.612174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.188 [2024-11-21 03:29:05.612270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.188 [2024-11-21 03:29:05.612285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:18.188 [2024-11-21 03:29:05.612295] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:18.188 [2024-11-21 03:29:05.612302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.188 [2024-11-21 03:29:05.613314] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:18.188 [2024-11-21 03:29:05.614687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 159.736 ms, result 0 00:20:18.188 [2024-11-21 03:29:05.616009] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:18.188 [2024-11-21 03:29:05.623305] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:19.130  [2024-11-21T03:29:08.082Z] Copying: 22/256 [MB] (22 MBps) [2024-11-21T03:29:09.025Z] Copying: 47/256 [MB] (24 MBps) [2024-11-21T03:29:09.969Z] Copying: 67/256 [MB] (20 MBps) [2024-11-21T03:29:10.914Z] Copying: 82/256 [MB] (14 MBps) [2024-11-21T03:29:11.858Z] Copying: 99/256 [MB] (17 MBps) [2024-11-21T03:29:12.799Z] Copying: 111/256 [MB] (11 MBps) [2024-11-21T03:29:13.789Z] Copying: 124/256 [MB] (13 MBps) [2024-11-21T03:29:14.755Z] Copying: 135/256 [MB] (11 MBps) [2024-11-21T03:29:15.699Z] Copying: 146/256 [MB] (11 MBps) [2024-11-21T03:29:17.085Z] Copying: 170/256 [MB] (24 MBps) [2024-11-21T03:29:18.031Z] Copying: 181/256 [MB] (10 MBps) [2024-11-21T03:29:18.975Z] Copying: 201/256 [MB] (20 MBps) [2024-11-21T03:29:19.919Z] Copying: 218/256 [MB] (17 MBps) [2024-11-21T03:29:20.865Z] Copying: 237/256 [MB] (18 MBps) [2024-11-21T03:29:20.866Z] Copying: 256/256 [MB] (average 17 MBps)[2024-11-21 03:29:20.771446] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:33.301 [2024-11-21 03:29:20.774012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.301 [2024-11-21 03:29:20.774069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:33.301 [2024-11-21 03:29:20.774087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:33.301 [2024-11-21 03:29:20.774102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.301 [2024-11-21 03:29:20.774129] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:33.301 [2024-11-21 03:29:20.775091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.301 [2024-11-21 03:29:20.775132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:33.301 [2024-11-21 03:29:20.775147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms 00:20:33.301 [2024-11-21 03:29:20.775158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.301 [2024-11-21 03:29:20.775457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.301 [2024-11-21 03:29:20.775480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:33.301 [2024-11-21 03:29:20.775501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:20:33.301 [2024-11-21 03:29:20.775514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.301 [2024-11-21 03:29:20.779543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.301 [2024-11-21 03:29:20.779571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist L2P 00:20:33.301 [2024-11-21 03:29:20.779584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.010 ms 00:20:33.301 [2024-11-21 03:29:20.779593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.301 [2024-11-21 03:29:20.786937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.301 [2024-11-21 03:29:20.787625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:33.301 [2024-11-21 03:29:20.787708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.318 ms 00:20:33.301 [2024-11-21 03:29:20.787761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.301 [2024-11-21 03:29:20.792205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.301 [2024-11-21 03:29:20.792287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:33.301 [2024-11-21 03:29:20.792327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.149 ms 00:20:33.301 [2024-11-21 03:29:20.792342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.301 [2024-11-21 03:29:20.798485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.301 [2024-11-21 03:29:20.798788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:33.301 [2024-11-21 03:29:20.798824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.065 ms 00:20:33.301 [2024-11-21 03:29:20.798840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.301 [2024-11-21 03:29:20.799163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.301 [2024-11-21 03:29:20.799189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:33.301 [2024-11-21 03:29:20.799208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:20:33.301 [2024-11-21 03:29:20.799235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.301 [2024-11-21 03:29:20.802921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.301 [2024-11-21 03:29:20.802975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:33.301 [2024-11-21 03:29:20.802986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.647 ms 00:20:33.301 [2024-11-21 03:29:20.802994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.301 [2024-11-21 03:29:20.805716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.301 [2024-11-21 03:29:20.805915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:33.301 [2024-11-21 03:29:20.805935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.671 ms 00:20:33.301 [2024-11-21 03:29:20.805943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.301 [2024-11-21 03:29:20.808700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.301 [2024-11-21 03:29:20.808756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:33.301 [2024-11-21 03:29:20.808768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.669 ms 00:20:33.301 [2024-11-21 03:29:20.808775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.301 [2024-11-21 03:29:20.811137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.301 [2024-11-21 
03:29:20.811189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:33.301 [2024-11-21 03:29:20.811199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.275 ms 00:20:33.301 [2024-11-21 03:29:20.811207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.301 [2024-11-21 03:29:20.811252] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:33.301 [2024-11-21 03:29:20.811269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:33.301 [2024-11-21 03:29:20.811623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811837] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.811997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.812005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.812014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.812022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.812030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.812038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.812047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.812057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.812066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.812074] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.812083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.812091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.812099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:33.302 [2024-11-21 03:29:20.812115] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:33.302 [2024-11-21 03:29:20.812139] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c1f45e4-d3ad-4cfe-a5aa-597dbbda17c1 00:20:33.302 [2024-11-21 03:29:20.812148] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:33.302 [2024-11-21 03:29:20.812156] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:33.302 [2024-11-21 03:29:20.812164] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:33.302 [2024-11-21 03:29:20.812178] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:33.302 [2024-11-21 03:29:20.812186] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:33.302 [2024-11-21 03:29:20.812194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:33.302 [2024-11-21 03:29:20.812206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:33.302 [2024-11-21 03:29:20.812213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:33.302 [2024-11-21 03:29:20.812219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:33.302 [2024-11-21 03:29:20.812227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.302 [2024-11-21 03:29:20.812236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:33.302 [2024-11-21 03:29:20.812259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:20:33.302 [2024-11-21 03:29:20.812268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.303 [2024-11-21 03:29:20.814875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.303 [2024-11-21 03:29:20.814939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:33.303 [2024-11-21 03:29:20.814950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.584 ms 00:20:33.303 [2024-11-21 03:29:20.814959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.303 [2024-11-21 03:29:20.815095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.303 [2024-11-21 03:29:20.815110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:33.303 [2024-11-21 03:29:20.815119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:20:33.303 [2024-11-21 03:29:20.815127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.303 [2024-11-21 03:29:20.823630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.303 [2024-11-21 03:29:20.823827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:33.303 [2024-11-21 03:29:20.823847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.303 [2024-11-21 03:29:20.823865] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:33.303 [2024-11-21 03:29:20.823988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.303 [2024-11-21 03:29:20.823999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:33.303 [2024-11-21 03:29:20.824008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.303 [2024-11-21 03:29:20.824016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.303 [2024-11-21 03:29:20.824068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.303 [2024-11-21 03:29:20.824078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:33.303 [2024-11-21 03:29:20.824092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.303 [2024-11-21 03:29:20.824100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.303 [2024-11-21 03:29:20.824125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.303 [2024-11-21 03:29:20.824134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:33.303 [2024-11-21 03:29:20.824143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.303 [2024-11-21 03:29:20.824150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.303 [2024-11-21 03:29:20.838131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.303 [2024-11-21 03:29:20.838182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:33.303 [2024-11-21 03:29:20.838194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.303 [2024-11-21 03:29:20.838202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.303 [2024-11-21 03:29:20.848468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.303 [2024-11-21 03:29:20.848519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:33.303 [2024-11-21 03:29:20.848530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.303 [2024-11-21 03:29:20.848548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.303 [2024-11-21 03:29:20.848597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.303 [2024-11-21 03:29:20.848606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:33.303 [2024-11-21 03:29:20.848615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.303 [2024-11-21 03:29:20.848623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.303 [2024-11-21 03:29:20.848653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.303 [2024-11-21 03:29:20.848665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:33.303 [2024-11-21 03:29:20.848674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.303 [2024-11-21 03:29:20.848681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.303 [2024-11-21 03:29:20.848757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.303 [2024-11-21 03:29:20.848767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:33.303 [2024-11-21 03:29:20.848776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:20:33.303 [2024-11-21 03:29:20.848784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.303 [2024-11-21 03:29:20.848815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.303 [2024-11-21 03:29:20.848825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:33.303 [2024-11-21 03:29:20.848837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.303 [2024-11-21 03:29:20.848845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.303 [2024-11-21 03:29:20.848923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.303 [2024-11-21 03:29:20.848933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:33.303 [2024-11-21 03:29:20.848942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.303 [2024-11-21 03:29:20.848950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.303 [2024-11-21 03:29:20.849001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.303 [2024-11-21 03:29:20.849014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:33.303 [2024-11-21 03:29:20.849023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.303 [2024-11-21 03:29:20.849032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.303 [2024-11-21 03:29:20.849185] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 75.179 ms, result 0 00:20:33.564 00:20:33.564 00:20:33.564 03:29:21 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:34.137 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:34.137 03:29:21 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:34.137 03:29:21 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:34.137 03:29:21 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:34.137 03:29:21 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:34.137 03:29:21 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:34.137 03:29:21 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:34.399 Process with pid 89586 is not found 00:20:34.399 03:29:21 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 89586 00:20:34.399 03:29:21 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89586 ']' 00:20:34.399 03:29:21 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89586 00:20:34.399 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89586) - No such process 00:20:34.399 03:29:21 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 89586 is not found' 00:20:34.399 ************************************ 00:20:34.399 END TEST ftl_trim 00:20:34.399 ************************************ 00:20:34.399 00:20:34.399 real 1m6.750s 00:20:34.399 user 1m26.300s 00:20:34.399 sys 0m5.164s 00:20:34.399 03:29:21 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:34.399 03:29:21 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:34.399 03:29:21 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:34.399 03:29:21 ftl 
-- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:34.399 03:29:21 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:34.399 03:29:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:34.399 ************************************ 00:20:34.399 START TEST ftl_restore 00:20:34.399 ************************************ 00:20:34.399 03:29:21 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:34.399 * Looking for test storage... 00:20:34.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:34.399 03:29:21 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:34.399 03:29:21 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:34.399 03:29:21 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:20:34.399 03:29:21 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:34.399 03:29:21 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:34.399 03:29:21 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:34.399 03:29:21 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:34.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.399 --rc genhtml_branch_coverage=1 00:20:34.399 --rc genhtml_function_coverage=1 00:20:34.399 --rc genhtml_legend=1 00:20:34.399 --rc geninfo_all_blocks=1 00:20:34.399 --rc geninfo_unexecuted_blocks=1 00:20:34.399 00:20:34.399 ' 00:20:34.399 03:29:21 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:34.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.399 --rc genhtml_branch_coverage=1 00:20:34.399 --rc genhtml_function_coverage=1 00:20:34.399 --rc genhtml_legend=1 00:20:34.399 --rc geninfo_all_blocks=1 00:20:34.399 --rc geninfo_unexecuted_blocks=1 00:20:34.399 00:20:34.399 ' 00:20:34.399 03:29:21 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:34.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.399 --rc genhtml_branch_coverage=1 00:20:34.399 --rc genhtml_function_coverage=1 00:20:34.399 --rc genhtml_legend=1 00:20:34.399 --rc geninfo_all_blocks=1 00:20:34.399 --rc geninfo_unexecuted_blocks=1 00:20:34.399 00:20:34.399 ' 00:20:34.399 03:29:21 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:34.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.399 --rc genhtml_branch_coverage=1 00:20:34.399 --rc genhtml_function_coverage=1 00:20:34.399 --rc genhtml_legend=1 00:20:34.399 --rc geninfo_all_blocks=1 00:20:34.399 --rc geninfo_unexecuted_blocks=1 00:20:34.399 00:20:34.399 ' 00:20:34.399 03:29:21 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:34.399 03:29:21 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:34.399 03:29:21 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:34.399 03:29:21 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:34.399 03:29:21 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:34.399 03:29:21 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:34.399 03:29:21 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:34.399 03:29:21 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:34.399 03:29:21 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:34.399 03:29:21 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:34.399 03:29:21 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:34.399 03:29:21 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:34.399 03:29:21 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:34.400 03:29:21 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:34.400 03:29:21 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:34.400 03:29:21 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:34.400 03:29:21 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:34.400 03:29:21 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:34.400 03:29:21 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:34.400 03:29:21 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:34.400 03:29:21 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:34.400 03:29:21 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:34.400 03:29:21 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:34.400 03:29:21 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:34.400 03:29:21 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:34.400 03:29:21 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:34.400 03:29:21 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:34.400 03:29:21 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:34.400 03:29:21 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:34.400 03:29:21 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:34.400 03:29:21 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:34.661 03:29:21 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.4eL5lkHvOC 00:20:34.661 03:29:21 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:34.661 03:29:21 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:34.661 03:29:21 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:34.661 03:29:21 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:34.661 03:29:21 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:34.661 03:29:21 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:34.661 03:29:21 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:34.661 03:29:21 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:34.661 
03:29:21 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=89858 00:20:34.661 03:29:21 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 89858 00:20:34.661 03:29:21 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 89858 ']' 00:20:34.661 03:29:21 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:34.661 03:29:21 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.661 03:29:21 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.661 03:29:21 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.661 03:29:21 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.661 03:29:21 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:34.661 [2024-11-21 03:29:22.056037] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:20:34.661 [2024-11-21 03:29:22.056427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89858 ] 00:20:34.661 [2024-11-21 03:29:22.194338] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:34.661 [2024-11-21 03:29:22.223838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.922 [2024-11-21 03:29:22.259855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.494 03:29:22 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.494 03:29:22 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:35.494 03:29:22 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:35.494 03:29:22 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:35.494 03:29:22 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:35.494 03:29:22 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:35.494 03:29:22 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:35.494 03:29:22 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:35.755 03:29:23 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:35.755 03:29:23 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:35.755 03:29:23 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:35.755 03:29:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:35.755 03:29:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:35.755 03:29:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:35.755 03:29:23 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:35.755 03:29:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:36.015 03:29:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:36.015 { 00:20:36.015 "name": "nvme0n1", 00:20:36.015 "aliases": [ 00:20:36.015 
"dd2b0a73-dd04-4091-9cbf-97db12314eda" 00:20:36.015 ], 00:20:36.015 "product_name": "NVMe disk", 00:20:36.015 "block_size": 4096, 00:20:36.016 "num_blocks": 1310720, 00:20:36.016 "uuid": "dd2b0a73-dd04-4091-9cbf-97db12314eda", 00:20:36.016 "numa_id": -1, 00:20:36.016 "assigned_rate_limits": { 00:20:36.016 "rw_ios_per_sec": 0, 00:20:36.016 "rw_mbytes_per_sec": 0, 00:20:36.016 "r_mbytes_per_sec": 0, 00:20:36.016 "w_mbytes_per_sec": 0 00:20:36.016 }, 00:20:36.016 "claimed": true, 00:20:36.016 "claim_type": "read_many_write_one", 00:20:36.016 "zoned": false, 00:20:36.016 "supported_io_types": { 00:20:36.016 "read": true, 00:20:36.016 "write": true, 00:20:36.016 "unmap": true, 00:20:36.016 "flush": true, 00:20:36.016 "reset": true, 00:20:36.016 "nvme_admin": true, 00:20:36.016 "nvme_io": true, 00:20:36.016 "nvme_io_md": false, 00:20:36.016 "write_zeroes": true, 00:20:36.016 "zcopy": false, 00:20:36.016 "get_zone_info": false, 00:20:36.016 "zone_management": false, 00:20:36.016 "zone_append": false, 00:20:36.016 "compare": true, 00:20:36.016 "compare_and_write": false, 00:20:36.016 "abort": true, 00:20:36.016 "seek_hole": false, 00:20:36.016 "seek_data": false, 00:20:36.016 "copy": true, 00:20:36.016 "nvme_iov_md": false 00:20:36.016 }, 00:20:36.016 "driver_specific": { 00:20:36.016 "nvme": [ 00:20:36.016 { 00:20:36.016 "pci_address": "0000:00:11.0", 00:20:36.016 "trid": { 00:20:36.016 "trtype": "PCIe", 00:20:36.016 "traddr": "0000:00:11.0" 00:20:36.016 }, 00:20:36.016 "ctrlr_data": { 00:20:36.016 "cntlid": 0, 00:20:36.016 "vendor_id": "0x1b36", 00:20:36.016 "model_number": "QEMU NVMe Ctrl", 00:20:36.016 "serial_number": "12341", 00:20:36.016 "firmware_revision": "8.0.0", 00:20:36.016 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:36.016 "oacs": { 00:20:36.016 "security": 0, 00:20:36.016 "format": 1, 00:20:36.016 "firmware": 0, 00:20:36.016 "ns_manage": 1 00:20:36.016 }, 00:20:36.016 "multi_ctrlr": false, 00:20:36.016 "ana_reporting": false 00:20:36.016 }, 00:20:36.016 "vs": { 00:20:36.016 "nvme_version": "1.4" 00:20:36.016 }, 00:20:36.016 "ns_data": { 00:20:36.016 "id": 1, 00:20:36.016 "can_share": false 00:20:36.016 } 00:20:36.016 } 00:20:36.016 ], 00:20:36.016 "mp_policy": "active_passive" 00:20:36.016 } 00:20:36.016 } 00:20:36.016 ]' 00:20:36.016 03:29:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:36.016 03:29:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:36.016 03:29:23 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:36.016 03:29:23 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:36.016 03:29:23 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:36.016 03:29:23 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:36.016 03:29:23 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:36.016 03:29:23 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:36.016 03:29:23 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:36.016 03:29:23 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:36.016 03:29:23 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:36.277 03:29:23 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=b87a609d-f524-4abb-8cb3-b7839adcc2a4 00:20:36.277 03:29:23 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:36.277 03:29:23 ftl.ftl_restore -- ftl/common.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b87a609d-f524-4abb-8cb3-b7839adcc2a4 00:20:36.537 03:29:23 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:36.798 03:29:24 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=08d646c2-03a1-40fd-92f5-58fd1437bd38 00:20:36.798 03:29:24 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 08d646c2-03a1-40fd-92f5-58fd1437bd38 00:20:37.058 03:29:24 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=0104a167-69dd-487a-8d0d-d3515768fae8 00:20:37.058 03:29:24 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:37.058 03:29:24 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0104a167-69dd-487a-8d0d-d3515768fae8 00:20:37.058 03:29:24 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:37.058 03:29:24 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:37.058 03:29:24 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=0104a167-69dd-487a-8d0d-d3515768fae8 00:20:37.058 03:29:24 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:37.058 03:29:24 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 0104a167-69dd-487a-8d0d-d3515768fae8 00:20:37.059 03:29:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0104a167-69dd-487a-8d0d-d3515768fae8 00:20:37.059 03:29:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:37.059 03:29:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:37.059 03:29:24 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:37.059 03:29:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0104a167-69dd-487a-8d0d-d3515768fae8 00:20:37.319 03:29:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:37.319 { 00:20:37.319 "name": "0104a167-69dd-487a-8d0d-d3515768fae8", 00:20:37.319 "aliases": [ 00:20:37.319 "lvs/nvme0n1p0" 00:20:37.319 ], 00:20:37.319 "product_name": "Logical Volume", 00:20:37.319 "block_size": 4096, 00:20:37.319 "num_blocks": 26476544, 00:20:37.319 "uuid": "0104a167-69dd-487a-8d0d-d3515768fae8", 00:20:37.319 "assigned_rate_limits": { 00:20:37.319 "rw_ios_per_sec": 0, 00:20:37.319 "rw_mbytes_per_sec": 0, 00:20:37.319 "r_mbytes_per_sec": 0, 00:20:37.319 "w_mbytes_per_sec": 0 00:20:37.319 }, 00:20:37.319 "claimed": false, 00:20:37.319 "zoned": false, 00:20:37.319 "supported_io_types": { 00:20:37.319 "read": true, 00:20:37.319 "write": true, 00:20:37.319 "unmap": true, 00:20:37.319 "flush": false, 00:20:37.319 "reset": true, 00:20:37.319 "nvme_admin": false, 00:20:37.319 "nvme_io": false, 00:20:37.319 "nvme_io_md": false, 00:20:37.319 "write_zeroes": true, 00:20:37.319 "zcopy": false, 00:20:37.319 "get_zone_info": false, 00:20:37.319 "zone_management": false, 00:20:37.319 "zone_append": false, 00:20:37.319 "compare": false, 00:20:37.319 "compare_and_write": false, 00:20:37.319 "abort": false, 00:20:37.319 "seek_hole": true, 00:20:37.319 "seek_data": true, 00:20:37.319 "copy": false, 00:20:37.319 "nvme_iov_md": false 00:20:37.319 }, 00:20:37.319 "driver_specific": { 00:20:37.319 "lvol": { 00:20:37.319 "lvol_store_uuid": "08d646c2-03a1-40fd-92f5-58fd1437bd38", 00:20:37.319 "base_bdev": "nvme0n1", 00:20:37.319 "thin_provision": true, 00:20:37.319 "num_allocated_clusters": 0, 
00:20:37.319 "snapshot": false, 00:20:37.319 "clone": false, 00:20:37.319 "esnap_clone": false 00:20:37.319 } 00:20:37.319 } 00:20:37.319 } 00:20:37.319 ]' 00:20:37.319 03:29:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:37.319 03:29:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:37.319 03:29:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:37.319 03:29:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:37.319 03:29:24 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:37.319 03:29:24 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:37.319 03:29:24 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:37.319 03:29:24 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:37.319 03:29:24 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:37.579 03:29:24 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:37.579 03:29:24 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:37.579 03:29:24 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 0104a167-69dd-487a-8d0d-d3515768fae8 00:20:37.579 03:29:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0104a167-69dd-487a-8d0d-d3515768fae8 00:20:37.579 03:29:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:37.579 03:29:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:37.579 03:29:24 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:37.579 03:29:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0104a167-69dd-487a-8d0d-d3515768fae8 00:20:37.579 03:29:25 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:37.579 { 00:20:37.579 "name": "0104a167-69dd-487a-8d0d-d3515768fae8", 00:20:37.579 "aliases": [ 00:20:37.579 "lvs/nvme0n1p0" 00:20:37.579 ], 00:20:37.579 "product_name": "Logical Volume", 00:20:37.579 "block_size": 4096, 00:20:37.579 "num_blocks": 26476544, 00:20:37.579 "uuid": "0104a167-69dd-487a-8d0d-d3515768fae8", 00:20:37.579 "assigned_rate_limits": { 00:20:37.579 "rw_ios_per_sec": 0, 00:20:37.579 "rw_mbytes_per_sec": 0, 00:20:37.579 "r_mbytes_per_sec": 0, 00:20:37.579 "w_mbytes_per_sec": 0 00:20:37.579 }, 00:20:37.579 "claimed": false, 00:20:37.579 "zoned": false, 00:20:37.579 "supported_io_types": { 00:20:37.579 "read": true, 00:20:37.579 "write": true, 00:20:37.579 "unmap": true, 00:20:37.579 "flush": false, 00:20:37.579 "reset": true, 00:20:37.579 "nvme_admin": false, 00:20:37.579 "nvme_io": false, 00:20:37.579 "nvme_io_md": false, 00:20:37.579 "write_zeroes": true, 00:20:37.579 "zcopy": false, 00:20:37.579 "get_zone_info": false, 00:20:37.579 "zone_management": false, 00:20:37.579 "zone_append": false, 00:20:37.579 "compare": false, 00:20:37.579 "compare_and_write": false, 00:20:37.579 "abort": false, 00:20:37.579 "seek_hole": true, 00:20:37.579 "seek_data": true, 00:20:37.579 "copy": false, 00:20:37.579 "nvme_iov_md": false 00:20:37.579 }, 00:20:37.579 "driver_specific": { 00:20:37.579 "lvol": { 00:20:37.579 "lvol_store_uuid": "08d646c2-03a1-40fd-92f5-58fd1437bd38", 00:20:37.579 "base_bdev": "nvme0n1", 00:20:37.579 "thin_provision": true, 00:20:37.579 "num_allocated_clusters": 0, 00:20:37.579 "snapshot": false, 00:20:37.579 "clone": false, 
00:20:37.579 "esnap_clone": false 00:20:37.579 } 00:20:37.579 } 00:20:37.579 } 00:20:37.579 ]' 00:20:37.579 03:29:25 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:37.839 03:29:25 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:37.839 03:29:25 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:37.839 03:29:25 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:37.839 03:29:25 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:37.839 03:29:25 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:37.839 03:29:25 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:37.839 03:29:25 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:37.839 03:29:25 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:37.839 03:29:25 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 0104a167-69dd-487a-8d0d-d3515768fae8 00:20:37.839 03:29:25 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0104a167-69dd-487a-8d0d-d3515768fae8 00:20:37.839 03:29:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:37.839 03:29:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:37.839 03:29:25 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:37.839 03:29:25 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0104a167-69dd-487a-8d0d-d3515768fae8 00:20:38.098 03:29:25 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:38.098 { 00:20:38.098 "name": "0104a167-69dd-487a-8d0d-d3515768fae8", 00:20:38.098 "aliases": [ 00:20:38.099 "lvs/nvme0n1p0" 00:20:38.099 ], 00:20:38.099 "product_name": "Logical Volume", 00:20:38.099 "block_size": 4096, 00:20:38.099 "num_blocks": 26476544, 00:20:38.099 "uuid": "0104a167-69dd-487a-8d0d-d3515768fae8", 00:20:38.099 "assigned_rate_limits": { 00:20:38.099 "rw_ios_per_sec": 0, 00:20:38.099 "rw_mbytes_per_sec": 0, 00:20:38.099 "r_mbytes_per_sec": 0, 00:20:38.099 "w_mbytes_per_sec": 0 00:20:38.099 }, 00:20:38.099 "claimed": false, 00:20:38.099 "zoned": false, 00:20:38.099 "supported_io_types": { 00:20:38.099 "read": true, 00:20:38.099 "write": true, 00:20:38.099 "unmap": true, 00:20:38.099 "flush": false, 00:20:38.099 "reset": true, 00:20:38.099 "nvme_admin": false, 00:20:38.099 "nvme_io": false, 00:20:38.099 "nvme_io_md": false, 00:20:38.099 "write_zeroes": true, 00:20:38.099 "zcopy": false, 00:20:38.099 "get_zone_info": false, 00:20:38.099 "zone_management": false, 00:20:38.099 "zone_append": false, 00:20:38.099 "compare": false, 00:20:38.099 "compare_and_write": false, 00:20:38.099 "abort": false, 00:20:38.099 "seek_hole": true, 00:20:38.099 "seek_data": true, 00:20:38.099 "copy": false, 00:20:38.099 "nvme_iov_md": false 00:20:38.099 }, 00:20:38.099 "driver_specific": { 00:20:38.099 "lvol": { 00:20:38.099 "lvol_store_uuid": "08d646c2-03a1-40fd-92f5-58fd1437bd38", 00:20:38.099 "base_bdev": "nvme0n1", 00:20:38.099 "thin_provision": true, 00:20:38.099 "num_allocated_clusters": 0, 00:20:38.099 "snapshot": false, 00:20:38.099 "clone": false, 00:20:38.099 "esnap_clone": false 00:20:38.099 } 00:20:38.099 } 00:20:38.099 } 00:20:38.099 ]' 00:20:38.099 03:29:25 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:38.099 03:29:25 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # bs=4096 00:20:38.099 03:29:25 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:38.099 03:29:25 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:38.099 03:29:25 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:38.099 03:29:25 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:38.099 03:29:25 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:38.099 03:29:25 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0104a167-69dd-487a-8d0d-d3515768fae8 --l2p_dram_limit 10' 00:20:38.099 03:29:25 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:38.099 03:29:25 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:38.099 03:29:25 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:38.099 03:29:25 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:38.099 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:38.099 03:29:25 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0104a167-69dd-487a-8d0d-d3515768fae8 --l2p_dram_limit 10 -c nvc0n1p0 00:20:38.359 [2024-11-21 03:29:25.856967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.359 [2024-11-21 03:29:25.857019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:38.359 [2024-11-21 03:29:25.857035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:38.359 [2024-11-21 03:29:25.857044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.359 [2024-11-21 03:29:25.857106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.359 [2024-11-21 03:29:25.857117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:38.359 [2024-11-21 03:29:25.857134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:38.359 [2024-11-21 03:29:25.857146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.359 [2024-11-21 03:29:25.857173] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:38.359 [2024-11-21 03:29:25.857455] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:38.359 [2024-11-21 03:29:25.857492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.359 [2024-11-21 03:29:25.857500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:38.359 [2024-11-21 03:29:25.857510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:20:38.359 [2024-11-21 03:29:25.857519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.359 [2024-11-21 03:29:25.857664] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c25190a4-7a93-467f-bf6a-a9d74520ea9c 00:20:38.359 [2024-11-21 03:29:25.858951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.359 [2024-11-21 03:29:25.858990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:38.359 [2024-11-21 03:29:25.859001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:38.359 [2024-11-21 03:29:25.859010] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.359 [2024-11-21 03:29:25.865178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.359 [2024-11-21 03:29:25.865213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:38.359 [2024-11-21 03:29:25.865222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.120 ms 00:20:38.359 [2024-11-21 03:29:25.865234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.359 [2024-11-21 03:29:25.865324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.359 [2024-11-21 03:29:25.865335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:38.359 [2024-11-21 03:29:25.865348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:38.359 [2024-11-21 03:29:25.865357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.359 [2024-11-21 03:29:25.865430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.359 [2024-11-21 03:29:25.865442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:38.359 [2024-11-21 03:29:25.865450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:38.359 [2024-11-21 03:29:25.865460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.359 [2024-11-21 03:29:25.865481] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:38.359 [2024-11-21 03:29:25.867226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.359 [2024-11-21 03:29:25.867256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:38.359 [2024-11-21 03:29:25.867268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.747 ms 00:20:38.359 [2024-11-21 03:29:25.867276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.359 [2024-11-21 03:29:25.867308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.359 [2024-11-21 03:29:25.867317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:38.359 [2024-11-21 03:29:25.867328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:38.359 [2024-11-21 03:29:25.867336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.359 [2024-11-21 03:29:25.867354] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:38.359 [2024-11-21 03:29:25.867495] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:38.359 [2024-11-21 03:29:25.867509] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:38.359 [2024-11-21 03:29:25.867519] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:38.359 [2024-11-21 03:29:25.867531] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:38.359 [2024-11-21 03:29:25.867542] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:38.359 [2024-11-21 03:29:25.867557] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:38.359 [2024-11-21 03:29:25.867569] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 
4 00:20:38.359 [2024-11-21 03:29:25.867578] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:38.359 [2024-11-21 03:29:25.867585] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:38.359 [2024-11-21 03:29:25.867594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.359 [2024-11-21 03:29:25.867601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:38.359 [2024-11-21 03:29:25.867610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:20:38.359 [2024-11-21 03:29:25.867617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.359 [2024-11-21 03:29:25.867703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.359 [2024-11-21 03:29:25.867711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:38.359 [2024-11-21 03:29:25.867720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:38.359 [2024-11-21 03:29:25.867734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.359 [2024-11-21 03:29:25.867829] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:38.359 [2024-11-21 03:29:25.867839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:38.359 [2024-11-21 03:29:25.867849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:38.359 [2024-11-21 03:29:25.867857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.359 [2024-11-21 03:29:25.867866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:38.359 [2024-11-21 03:29:25.867873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:38.359 [2024-11-21 03:29:25.867882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:38.359 [2024-11-21 03:29:25.867890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:38.359 [2024-11-21 03:29:25.867911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:38.359 [2024-11-21 03:29:25.867918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:38.359 [2024-11-21 03:29:25.867928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:38.359 [2024-11-21 03:29:25.867934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:38.359 [2024-11-21 03:29:25.867945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:38.359 [2024-11-21 03:29:25.867952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:38.359 [2024-11-21 03:29:25.867960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:38.359 [2024-11-21 03:29:25.867967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.359 [2024-11-21 03:29:25.867975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:38.359 [2024-11-21 03:29:25.867982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:38.359 [2024-11-21 03:29:25.867990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.359 [2024-11-21 03:29:25.867997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:38.359 [2024-11-21 03:29:25.868005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:38.359 [2024-11-21 03:29:25.868012] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.360 [2024-11-21 03:29:25.868020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:38.360 [2024-11-21 03:29:25.868026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:38.360 [2024-11-21 03:29:25.868034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.360 [2024-11-21 03:29:25.868041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:38.360 [2024-11-21 03:29:25.868049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:38.360 [2024-11-21 03:29:25.868055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.360 [2024-11-21 03:29:25.868067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:38.360 [2024-11-21 03:29:25.868073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:38.360 [2024-11-21 03:29:25.868081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.360 [2024-11-21 03:29:25.868087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:38.360 [2024-11-21 03:29:25.868095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:38.360 [2024-11-21 03:29:25.868101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:38.360 [2024-11-21 03:29:25.868110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:38.360 [2024-11-21 03:29:25.868116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:38.360 [2024-11-21 03:29:25.868126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:38.360 [2024-11-21 03:29:25.868132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:38.360 [2024-11-21 03:29:25.868141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:38.360 [2024-11-21 03:29:25.868147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.360 [2024-11-21 03:29:25.868156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:38.360 [2024-11-21 03:29:25.868163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:38.360 [2024-11-21 03:29:25.868170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.360 [2024-11-21 03:29:25.868177] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:38.360 [2024-11-21 03:29:25.868188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:38.360 [2024-11-21 03:29:25.868195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:38.360 [2024-11-21 03:29:25.868204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.360 [2024-11-21 03:29:25.868212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:38.360 [2024-11-21 03:29:25.868221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:38.360 [2024-11-21 03:29:25.868227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:38.360 [2024-11-21 03:29:25.868235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:38.360 [2024-11-21 03:29:25.868242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:38.360 [2024-11-21 03:29:25.868250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 
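The layout figures dumped above are internally consistent; as a quick cross-check (a minimal shell sketch using only values printed in this dump — illustrative arithmetic, not part of the test run):

    # L2P region: 20971520 entries at 4 B per entry, converted to MiB
    echo $(( 20971520 * 4 / 1048576 ))      # -> 80, matching "Region l2p ... blocks: 80.00 MiB"
    # Base device capacity: 26476544 blocks of 4096 B, converted to MiB
    echo $(( 26476544 * 4096 / 1048576 ))   # -> 103424, matching "Base device capacity: 103424.00 MiB"
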
00:20:38.360 [2024-11-21 03:29:25.868260] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:38.360 [2024-11-21 03:29:25.868273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:38.360 [2024-11-21 03:29:25.868281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:38.360 [2024-11-21 03:29:25.868290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:38.360 [2024-11-21 03:29:25.868297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:38.360 [2024-11-21 03:29:25.868306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:38.360 [2024-11-21 03:29:25.868313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:38.360 [2024-11-21 03:29:25.868324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:38.360 [2024-11-21 03:29:25.868331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:38.360 [2024-11-21 03:29:25.868339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:38.360 [2024-11-21 03:29:25.868346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:38.360 [2024-11-21 03:29:25.868354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:38.360 [2024-11-21 03:29:25.868362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:38.360 [2024-11-21 03:29:25.868370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:38.360 [2024-11-21 03:29:25.868377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:38.360 [2024-11-21 03:29:25.868385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:38.360 [2024-11-21 03:29:25.868392] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:38.360 [2024-11-21 03:29:25.868402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:38.360 [2024-11-21 03:29:25.868410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:38.360 [2024-11-21 03:29:25.868418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:38.360 [2024-11-21 03:29:25.868425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:38.360 [2024-11-21 03:29:25.868434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:38.360 [2024-11-21 03:29:25.868442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.360 [2024-11-21 03:29:25.868452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:38.360 [2024-11-21 03:29:25.868460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:20:38.360 [2024-11-21 03:29:25.868469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.360 [2024-11-21 03:29:25.868506] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:38.360 [2024-11-21 03:29:25.868517] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:42.569 [2024-11-21 03:29:29.775188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.569 [2024-11-21 03:29:29.775244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:42.570 [2024-11-21 03:29:29.775256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3906.667 ms 00:20:42.570 [2024-11-21 03:29:29.775264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.782717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.782754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:42.570 [2024-11-21 03:29:29.782763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.387 ms 00:20:42.570 [2024-11-21 03:29:29.782773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.782841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.782850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:42.570 [2024-11-21 03:29:29.782856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:42.570 [2024-11-21 03:29:29.782864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.790204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.790240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:42.570 [2024-11-21 03:29:29.790249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.307 ms 00:20:42.570 [2024-11-21 03:29:29.790258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.790279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.790291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.570 [2024-11-21 03:29:29.790297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:42.570 [2024-11-21 03:29:29.790304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.790583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.790606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.570 [2024-11-21 03:29:29.790614] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:20:42.570 [2024-11-21 03:29:29.790624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.790707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.790716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.570 [2024-11-21 03:29:29.790727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:42.570 [2024-11-21 03:29:29.790737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.795643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.795676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.570 [2024-11-21 03:29:29.795683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.893 ms 00:20:42.570 [2024-11-21 03:29:29.795690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.802241] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:42.570 [2024-11-21 03:29:29.804483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.804507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:42.570 [2024-11-21 03:29:29.804517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.731 ms 00:20:42.570 [2024-11-21 03:29:29.804523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.872841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.872877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:42.570 [2024-11-21 03:29:29.872893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.296 ms 00:20:42.570 [2024-11-21 03:29:29.872909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.873047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.873056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:42.570 [2024-11-21 03:29:29.873064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:20:42.570 [2024-11-21 03:29:29.873069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.875827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.875857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:42.570 [2024-11-21 03:29:29.875869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.740 ms 00:20:42.570 [2024-11-21 03:29:29.875875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.877878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.877916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:42.570 [2024-11-21 03:29:29.877927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.965 ms 00:20:42.570 [2024-11-21 03:29:29.877933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.878177] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.878192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:42.570 [2024-11-21 03:29:29.878203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:20:42.570 [2024-11-21 03:29:29.878209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.906232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.906262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:42.570 [2024-11-21 03:29:29.906274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.005 ms 00:20:42.570 [2024-11-21 03:29:29.906280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.909814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.909843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:42.570 [2024-11-21 03:29:29.909853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.507 ms 00:20:42.570 [2024-11-21 03:29:29.909858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.912350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.912377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:42.570 [2024-11-21 03:29:29.912385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.473 ms 00:20:42.570 [2024-11-21 03:29:29.912390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.915373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.915401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:42.570 [2024-11-21 03:29:29.915411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.955 ms 00:20:42.570 [2024-11-21 03:29:29.915417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.915437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.915444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:42.570 [2024-11-21 03:29:29.915452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:42.570 [2024-11-21 03:29:29.915457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.915506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.570 [2024-11-21 03:29:29.915512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:42.570 [2024-11-21 03:29:29.915525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:42.570 [2024-11-21 03:29:29.915532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.570 [2024-11-21 03:29:29.916196] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4058.932 ms, result 0 00:20:42.570 { 00:20:42.570 "name": "ftl0", 00:20:42.570 "uuid": "c25190a4-7a93-467f-bf6a-a9d74520ea9c" 00:20:42.570 } 00:20:42.570 03:29:29 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:42.570 03:29:29 ftl.ftl_restore -- ftl/restore.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:42.833 03:29:30 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:42.833 03:29:30 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:42.833 [2024-11-21 03:29:30.320726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.833 [2024-11-21 03:29:30.320765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:42.833 [2024-11-21 03:29:30.320780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:42.833 [2024-11-21 03:29:30.320787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.833 [2024-11-21 03:29:30.320804] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:42.833 [2024-11-21 03:29:30.321222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.833 [2024-11-21 03:29:30.321241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:42.833 [2024-11-21 03:29:30.321250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:20:42.833 [2024-11-21 03:29:30.321256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.833 [2024-11-21 03:29:30.321452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.833 [2024-11-21 03:29:30.321486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:42.833 [2024-11-21 03:29:30.321494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:20:42.833 [2024-11-21 03:29:30.321501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.833 [2024-11-21 03:29:30.323926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.833 [2024-11-21 03:29:30.323945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:42.833 [2024-11-21 03:29:30.323953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.408 ms 00:20:42.833 [2024-11-21 03:29:30.323960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.833 [2024-11-21 03:29:30.328747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.833 [2024-11-21 03:29:30.328774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:42.833 [2024-11-21 03:29:30.328783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.771 ms 00:20:42.833 [2024-11-21 03:29:30.328791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.833 [2024-11-21 03:29:30.330060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.833 [2024-11-21 03:29:30.330085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:42.833 [2024-11-21 03:29:30.330094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.215 ms 00:20:42.833 [2024-11-21 03:29:30.330099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.833 [2024-11-21 03:29:30.334215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.833 [2024-11-21 03:29:30.334243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:42.833 [2024-11-21 03:29:30.334253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.086 ms 00:20:42.833 [2024-11-21 03:29:30.334259] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:42.833 [2024-11-21 03:29:30.334351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.833 [2024-11-21 03:29:30.334359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:42.833 [2024-11-21 03:29:30.334369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:42.833 [2024-11-21 03:29:30.334374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.833 [2024-11-21 03:29:30.335844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.833 [2024-11-21 03:29:30.335870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:42.833 [2024-11-21 03:29:30.335878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.451 ms 00:20:42.833 [2024-11-21 03:29:30.335883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.833 [2024-11-21 03:29:30.337050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.833 [2024-11-21 03:29:30.337072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:42.833 [2024-11-21 03:29:30.337080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.128 ms 00:20:42.833 [2024-11-21 03:29:30.337086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.833 [2024-11-21 03:29:30.338093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.833 [2024-11-21 03:29:30.338119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:42.833 [2024-11-21 03:29:30.338127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:20:42.833 [2024-11-21 03:29:30.338132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.833 [2024-11-21 03:29:30.339222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.833 [2024-11-21 03:29:30.339247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:42.833 [2024-11-21 03:29:30.339255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:20:42.833 [2024-11-21 03:29:30.339260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.833 [2024-11-21 03:29:30.339299] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:42.833 [2024-11-21 03:29:30.339311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 
00:20:42.833 [2024-11-21 03:29:30.339371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:42.833 [2024-11-21 03:29:30.339523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 
wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339870] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.339996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:42.834 [2024-11-21 03:29:30.340008] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:42.834 [2024-11-21 03:29:30.340016] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c25190a4-7a93-467f-bf6a-a9d74520ea9c 00:20:42.834 [2024-11-21 03:29:30.340023] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:42.834 [2024-11-21 03:29:30.340030] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:42.834 [2024-11-21 03:29:30.340035] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:42.834 [2024-11-21 03:29:30.340042] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:42.834 [2024-11-21 03:29:30.340048] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:42.834 [2024-11-21 03:29:30.340057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:42.834 [2024-11-21 03:29:30.340063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:42.834 
[2024-11-21 03:29:30.340069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:42.834 [2024-11-21 03:29:30.340074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:42.834 [2024-11-21 03:29:30.340082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.834 [2024-11-21 03:29:30.340088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:42.834 [2024-11-21 03:29:30.340096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:20:42.834 [2024-11-21 03:29:30.340102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.834 [2024-11-21 03:29:30.341327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.834 [2024-11-21 03:29:30.341348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:42.834 [2024-11-21 03:29:30.341359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms 00:20:42.834 [2024-11-21 03:29:30.341366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.834 [2024-11-21 03:29:30.341437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.834 [2024-11-21 03:29:30.341448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:42.834 [2024-11-21 03:29:30.341457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:42.834 [2024-11-21 03:29:30.341462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.834 [2024-11-21 03:29:30.345944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.834 [2024-11-21 03:29:30.345964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.834 [2024-11-21 03:29:30.345974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.834 [2024-11-21 03:29:30.345980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.835 [2024-11-21 03:29:30.346030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.835 [2024-11-21 03:29:30.346036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.835 [2024-11-21 03:29:30.346043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.835 [2024-11-21 03:29:30.346049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.835 [2024-11-21 03:29:30.346090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.835 [2024-11-21 03:29:30.346098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.835 [2024-11-21 03:29:30.346107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.835 [2024-11-21 03:29:30.346114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.835 [2024-11-21 03:29:30.346128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.835 [2024-11-21 03:29:30.346135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.835 [2024-11-21 03:29:30.346141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.835 [2024-11-21 03:29:30.346147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.835 [2024-11-21 03:29:30.353808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.835 [2024-11-21 03:29:30.353845] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:42.835 [2024-11-21 03:29:30.353854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.835 [2024-11-21 03:29:30.353862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.835 [2024-11-21 03:29:30.360308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.835 [2024-11-21 03:29:30.360343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:42.835 [2024-11-21 03:29:30.360352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.835 [2024-11-21 03:29:30.360358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.835 [2024-11-21 03:29:30.360413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.835 [2024-11-21 03:29:30.360421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:42.835 [2024-11-21 03:29:30.360428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.835 [2024-11-21 03:29:30.360434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.835 [2024-11-21 03:29:30.360462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.835 [2024-11-21 03:29:30.360468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:42.835 [2024-11-21 03:29:30.360476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.835 [2024-11-21 03:29:30.360481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.835 [2024-11-21 03:29:30.360532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.835 [2024-11-21 03:29:30.360544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:42.835 [2024-11-21 03:29:30.360551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.835 [2024-11-21 03:29:30.360557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.835 [2024-11-21 03:29:30.360581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.835 [2024-11-21 03:29:30.360588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:42.835 [2024-11-21 03:29:30.360596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.835 [2024-11-21 03:29:30.360601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.835 [2024-11-21 03:29:30.360631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.835 [2024-11-21 03:29:30.360637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:42.835 [2024-11-21 03:29:30.360644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.835 [2024-11-21 03:29:30.360650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.835 [2024-11-21 03:29:30.360685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.835 [2024-11-21 03:29:30.360696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:42.835 [2024-11-21 03:29:30.360703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.835 [2024-11-21 03:29:30.360708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.835 [2024-11-21 03:29:30.360809] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process 
finished, name 'FTL shutdown', duration = 40.056 ms, result 0 00:20:42.835 true 00:20:42.835 03:29:30 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 89858 00:20:42.835 03:29:30 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 89858 ']' 00:20:42.835 03:29:30 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 89858 00:20:42.835 03:29:30 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:42.835 03:29:30 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.835 03:29:30 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89858 00:20:43.097 03:29:30 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:43.097 killing process with pid 89858 00:20:43.097 03:29:30 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:43.097 03:29:30 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89858' 00:20:43.097 03:29:30 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 89858 00:20:43.097 03:29:30 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 89858 00:20:48.387 03:29:35 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:52.595 262144+0 records in 00:20:52.595 262144+0 records out 00:20:52.595 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.48074 s, 240 MB/s 00:20:52.595 03:29:39 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:54.511 03:29:41 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:54.511 [2024-11-21 03:29:41.900227] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:20:54.511 [2024-11-21 03:29:41.900353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90083 ] 00:20:54.511 [2024-11-21 03:29:42.032465] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
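For orientation, the write phase now starting reduces to the following shell sequence, a minimal sketch reconstructed from the command lines echoed in the log (paths abbreviated from /home/vagrant/spdk_repo/spdk/test/ftl/...; the autotest helper wrappers are omitted):

+ dd if=/dev/urandom of=testfile bs=4K count=256K    # restore.sh@69: 1 GiB of random data (262144 x 4 KiB)
+ md5sum testfile                                    # restore.sh@70: reference checksum of the source data
+ spdk_dd --if=testfile --ob=ftl0 --json=ftl.json    # restore.sh@73: write the file through the FTL bdev
+ spdk_dd --ib=ftl0 --of=testfile --json=ftl.json --count=262144
                                                     # restore.sh@74 (later below): read 1 GiB back to compare

The --count=262144 on the read-back side appears to be in 4 KiB units, matching the 1 GiB written (262144 x 4096 bytes = 1073741824 bytes, the figure dd reports above).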
00:20:54.511 [2024-11-21 03:29:42.063012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.773 [2024-11-21 03:29:42.084050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.773 [2024-11-21 03:29:42.179282] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.773 [2024-11-21 03:29:42.179349] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:55.036 [2024-11-21 03:29:42.338121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.036 [2024-11-21 03:29:42.338182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:55.036 [2024-11-21 03:29:42.338198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:55.036 [2024-11-21 03:29:42.338207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.036 [2024-11-21 03:29:42.338263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.036 [2024-11-21 03:29:42.338274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:55.036 [2024-11-21 03:29:42.338283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:55.036 [2024-11-21 03:29:42.338291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.036 [2024-11-21 03:29:42.338316] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:55.036 [2024-11-21 03:29:42.339116] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:55.036 [2024-11-21 03:29:42.339168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.036 [2024-11-21 03:29:42.339178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:55.036 [2024-11-21 03:29:42.339191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms 00:20:55.036 [2024-11-21 03:29:42.339199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.036 [2024-11-21 03:29:42.340929] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:55.036 [2024-11-21 03:29:42.344829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.036 [2024-11-21 03:29:42.344881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:55.036 [2024-11-21 03:29:42.344920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.903 ms 00:20:55.036 [2024-11-21 03:29:42.344934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.037 [2024-11-21 03:29:42.345017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.037 [2024-11-21 03:29:42.345028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:55.037 [2024-11-21 03:29:42.345037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:55.037 [2024-11-21 03:29:42.345045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.037 [2024-11-21 03:29:42.353508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.037 [2024-11-21 03:29:42.353559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:55.037 [2024-11-21 03:29:42.353571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.415 ms 00:20:55.037 [2024-11-21 03:29:42.353589] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.037 [2024-11-21 03:29:42.353695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.037 [2024-11-21 03:29:42.353706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:55.037 [2024-11-21 03:29:42.353717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:55.037 [2024-11-21 03:29:42.353726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.037 [2024-11-21 03:29:42.353786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.037 [2024-11-21 03:29:42.353795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:55.037 [2024-11-21 03:29:42.353803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:55.037 [2024-11-21 03:29:42.353814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.037 [2024-11-21 03:29:42.353836] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:55.037 [2024-11-21 03:29:42.355921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.037 [2024-11-21 03:29:42.355969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:55.037 [2024-11-21 03:29:42.355978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.090 ms 00:20:55.037 [2024-11-21 03:29:42.355987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.037 [2024-11-21 03:29:42.356021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.037 [2024-11-21 03:29:42.356031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:55.037 [2024-11-21 03:29:42.356039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:55.037 [2024-11-21 03:29:42.356053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.037 [2024-11-21 03:29:42.356079] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:55.037 [2024-11-21 03:29:42.356101] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:55.037 [2024-11-21 03:29:42.356138] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:55.037 [2024-11-21 03:29:42.356154] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:55.037 [2024-11-21 03:29:42.356259] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:55.037 [2024-11-21 03:29:42.356271] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:55.037 [2024-11-21 03:29:42.356286] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:55.037 [2024-11-21 03:29:42.356296] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:55.037 [2024-11-21 03:29:42.356306] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:55.037 [2024-11-21 03:29:42.356315] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:55.037 [2024-11-21 03:29:42.356322] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:20:55.037 [2024-11-21 03:29:42.356330] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:55.037 [2024-11-21 03:29:42.356338] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:55.037 [2024-11-21 03:29:42.356347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.037 [2024-11-21 03:29:42.356355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:55.037 [2024-11-21 03:29:42.356367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:20:55.037 [2024-11-21 03:29:42.356378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.037 [2024-11-21 03:29:42.356463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.037 [2024-11-21 03:29:42.356482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:55.037 [2024-11-21 03:29:42.356489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:55.037 [2024-11-21 03:29:42.356498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.037 [2024-11-21 03:29:42.356599] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:55.037 [2024-11-21 03:29:42.356611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:55.037 [2024-11-21 03:29:42.356621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:55.037 [2024-11-21 03:29:42.356630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.037 [2024-11-21 03:29:42.356640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:55.037 [2024-11-21 03:29:42.356648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:55.037 [2024-11-21 03:29:42.356656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:55.037 [2024-11-21 03:29:42.356664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:55.037 [2024-11-21 03:29:42.356680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:55.037 [2024-11-21 03:29:42.356690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:55.037 [2024-11-21 03:29:42.356698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:55.037 [2024-11-21 03:29:42.356707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:55.037 [2024-11-21 03:29:42.356715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:55.037 [2024-11-21 03:29:42.356725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:55.037 [2024-11-21 03:29:42.356733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:55.037 [2024-11-21 03:29:42.356742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.037 [2024-11-21 03:29:42.356750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:55.037 [2024-11-21 03:29:42.356758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:55.037 [2024-11-21 03:29:42.356766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.037 [2024-11-21 03:29:42.356774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:55.037 [2024-11-21 03:29:42.356782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:55.037 [2024-11-21 03:29:42.356790] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.037 [2024-11-21 03:29:42.356798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:55.037 [2024-11-21 03:29:42.356806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:55.037 [2024-11-21 03:29:42.356813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.037 [2024-11-21 03:29:42.356826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:55.037 [2024-11-21 03:29:42.356833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:55.037 [2024-11-21 03:29:42.356840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.037 [2024-11-21 03:29:42.356848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:55.037 [2024-11-21 03:29:42.356857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:55.037 [2024-11-21 03:29:42.356865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.037 [2024-11-21 03:29:42.356873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:55.037 [2024-11-21 03:29:42.356881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:55.037 [2024-11-21 03:29:42.356888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:55.037 [2024-11-21 03:29:42.356942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:55.038 [2024-11-21 03:29:42.356950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:55.038 [2024-11-21 03:29:42.356958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:55.038 [2024-11-21 03:29:42.356966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:55.038 [2024-11-21 03:29:42.356974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:55.038 [2024-11-21 03:29:42.356981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.038 [2024-11-21 03:29:42.356989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:55.038 [2024-11-21 03:29:42.357000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:55.038 [2024-11-21 03:29:42.357008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.038 [2024-11-21 03:29:42.357017] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:55.038 [2024-11-21 03:29:42.357033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:55.038 [2024-11-21 03:29:42.357041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:55.038 [2024-11-21 03:29:42.357052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.038 [2024-11-21 03:29:42.357060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:55.038 [2024-11-21 03:29:42.357067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:55.038 [2024-11-21 03:29:42.357074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:55.038 [2024-11-21 03:29:42.357081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:55.038 [2024-11-21 03:29:42.357088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:55.038 [2024-11-21 03:29:42.357095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:20:55.038 [2024-11-21 03:29:42.357104] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:55.038 [2024-11-21 03:29:42.357117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:55.038 [2024-11-21 03:29:42.357126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:55.038 [2024-11-21 03:29:42.357133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:55.038 [2024-11-21 03:29:42.357143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:55.038 [2024-11-21 03:29:42.357151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:55.038 [2024-11-21 03:29:42.357158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:55.038 [2024-11-21 03:29:42.357166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:55.038 [2024-11-21 03:29:42.357174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:55.038 [2024-11-21 03:29:42.357181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:55.038 [2024-11-21 03:29:42.357189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:55.038 [2024-11-21 03:29:42.357196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:55.038 [2024-11-21 03:29:42.357202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:55.038 [2024-11-21 03:29:42.357209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:55.038 [2024-11-21 03:29:42.357216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:55.038 [2024-11-21 03:29:42.357223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:55.038 [2024-11-21 03:29:42.357231] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:55.038 [2024-11-21 03:29:42.357239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:55.038 [2024-11-21 03:29:42.357248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:55.038 [2024-11-21 03:29:42.357255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:55.038 [2024-11-21 03:29:42.357265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:55.038 [2024-11-21 03:29:42.357272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:55.038 [2024-11-21 03:29:42.357281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.038 [2024-11-21 03:29:42.357289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:55.038 [2024-11-21 03:29:42.357297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:20:55.038 [2024-11-21 03:29:42.357308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.038 [2024-11-21 03:29:42.371851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.038 [2024-11-21 03:29:42.371911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:55.038 [2024-11-21 03:29:42.371923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.496 ms 00:20:55.038 [2024-11-21 03:29:42.371933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.038 [2024-11-21 03:29:42.372021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.038 [2024-11-21 03:29:42.372032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:55.038 [2024-11-21 03:29:42.372048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:55.038 [2024-11-21 03:29:42.372061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.038 [2024-11-21 03:29:42.393298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.038 [2024-11-21 03:29:42.393357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:55.038 [2024-11-21 03:29:42.393376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.173 ms 00:20:55.038 [2024-11-21 03:29:42.393389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.038 [2024-11-21 03:29:42.393456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.038 [2024-11-21 03:29:42.393467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:55.038 [2024-11-21 03:29:42.393483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:55.038 [2024-11-21 03:29:42.393492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.038 [2024-11-21 03:29:42.394089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.038 [2024-11-21 03:29:42.394128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:55.038 [2024-11-21 03:29:42.394140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:20:55.038 [2024-11-21 03:29:42.394149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.038 [2024-11-21 03:29:42.394304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.038 [2024-11-21 03:29:42.394313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:55.038 [2024-11-21 03:29:42.394325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:20:55.038 [2024-11-21 03:29:42.394333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.038 [2024-11-21 03:29:42.402459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.038 [2024-11-21 
03:29:42.402504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:55.038 [2024-11-21 03:29:42.402517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.100 ms 00:20:55.038 [2024-11-21 03:29:42.402536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.038 [2024-11-21 03:29:42.406530] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:55.038 [2024-11-21 03:29:42.406591] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:55.038 [2024-11-21 03:29:42.406603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.038 [2024-11-21 03:29:42.406612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:55.038 [2024-11-21 03:29:42.406622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.965 ms 00:20:55.038 [2024-11-21 03:29:42.406629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.038 [2024-11-21 03:29:42.422647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.038 [2024-11-21 03:29:42.422724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:55.038 [2024-11-21 03:29:42.422736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.962 ms 00:20:55.039 [2024-11-21 03:29:42.422744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.039 [2024-11-21 03:29:42.425680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.039 [2024-11-21 03:29:42.425730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:55.039 [2024-11-21 03:29:42.425740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.881 ms 00:20:55.039 [2024-11-21 03:29:42.425748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.039 [2024-11-21 03:29:42.428522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.039 [2024-11-21 03:29:42.428570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:55.039 [2024-11-21 03:29:42.428581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.721 ms 00:20:55.039 [2024-11-21 03:29:42.428599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.039 [2024-11-21 03:29:42.428961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.039 [2024-11-21 03:29:42.428976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:55.039 [2024-11-21 03:29:42.428991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:20:55.039 [2024-11-21 03:29:42.429000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.039 [2024-11-21 03:29:42.453941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.039 [2024-11-21 03:29:42.454024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:55.039 [2024-11-21 03:29:42.454037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.919 ms 00:20:55.039 [2024-11-21 03:29:42.454045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.039 [2024-11-21 03:29:42.461952] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:55.039 [2024-11-21 03:29:42.464963] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.039 [2024-11-21 03:29:42.465009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:55.039 [2024-11-21 03:29:42.465027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.866 ms 00:20:55.039 [2024-11-21 03:29:42.465036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.039 [2024-11-21 03:29:42.465105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.039 [2024-11-21 03:29:42.465121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:55.039 [2024-11-21 03:29:42.465130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:55.039 [2024-11-21 03:29:42.465137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.039 [2024-11-21 03:29:42.465203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.039 [2024-11-21 03:29:42.465214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:55.039 [2024-11-21 03:29:42.465222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:55.039 [2024-11-21 03:29:42.465238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.039 [2024-11-21 03:29:42.465260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.039 [2024-11-21 03:29:42.465269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:55.039 [2024-11-21 03:29:42.465277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:55.039 [2024-11-21 03:29:42.465287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.039 [2024-11-21 03:29:42.465319] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:55.039 [2024-11-21 03:29:42.465329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.039 [2024-11-21 03:29:42.465342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:55.039 [2024-11-21 03:29:42.465353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:55.039 [2024-11-21 03:29:42.465361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.039 [2024-11-21 03:29:42.470524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.039 [2024-11-21 03:29:42.470573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:55.039 [2024-11-21 03:29:42.470583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.139 ms 00:20:55.039 [2024-11-21 03:29:42.470591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.039 [2024-11-21 03:29:42.470673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.039 [2024-11-21 03:29:42.470686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:55.039 [2024-11-21 03:29:42.470699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:55.039 [2024-11-21 03:29:42.470707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.039 [2024-11-21 03:29:42.471776] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.231 ms, result 0 00:20:55.984  [2024-11-21T03:29:44.493Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-21T03:29:45.882Z] Copying: 32/1024 [MB] (16 MBps) 
[2024-11-21T03:29:46.827Z] Copying: 59/1024 [MB] (26 MBps) [2024-11-21T03:29:47.807Z] Copying: 77/1024 [MB] (17 MBps) [2024-11-21T03:29:48.751Z] Copying: 96/1024 [MB] (19 MBps) [2024-11-21T03:29:49.702Z] Copying: 131/1024 [MB] (35 MBps) [2024-11-21T03:29:50.645Z] Copying: 149/1024 [MB] (18 MBps) [2024-11-21T03:29:51.587Z] Copying: 167/1024 [MB] (18 MBps) [2024-11-21T03:29:52.532Z] Copying: 185/1024 [MB] (17 MBps) [2024-11-21T03:29:53.492Z] Copying: 215/1024 [MB] (29 MBps) [2024-11-21T03:29:54.880Z] Copying: 234/1024 [MB] (19 MBps) [2024-11-21T03:29:55.824Z] Copying: 253/1024 [MB] (18 MBps) [2024-11-21T03:29:56.767Z] Copying: 267/1024 [MB] (13 MBps) [2024-11-21T03:29:57.711Z] Copying: 295/1024 [MB] (28 MBps) [2024-11-21T03:29:58.655Z] Copying: 317/1024 [MB] (21 MBps) [2024-11-21T03:29:59.598Z] Copying: 336/1024 [MB] (18 MBps) [2024-11-21T03:30:00.546Z] Copying: 358/1024 [MB] (22 MBps) [2024-11-21T03:30:01.492Z] Copying: 373/1024 [MB] (15 MBps) [2024-11-21T03:30:02.880Z] Copying: 384/1024 [MB] (10 MBps) [2024-11-21T03:30:03.826Z] Copying: 394/1024 [MB] (10 MBps) [2024-11-21T03:30:04.771Z] Copying: 405/1024 [MB] (10 MBps) [2024-11-21T03:30:05.715Z] Copying: 418/1024 [MB] (13 MBps) [2024-11-21T03:30:06.659Z] Copying: 429/1024 [MB] (10 MBps) [2024-11-21T03:30:07.604Z] Copying: 440/1024 [MB] (11 MBps) [2024-11-21T03:30:08.549Z] Copying: 460/1024 [MB] (19 MBps) [2024-11-21T03:30:09.493Z] Copying: 481/1024 [MB] (21 MBps) [2024-11-21T03:30:10.882Z] Copying: 494/1024 [MB] (12 MBps) [2024-11-21T03:30:11.827Z] Copying: 516264/1048576 [kB] (10128 kBps) [2024-11-21T03:30:12.772Z] Copying: 517/1024 [MB] (13 MBps) [2024-11-21T03:30:13.717Z] Copying: 531/1024 [MB] (13 MBps) [2024-11-21T03:30:14.662Z] Copying: 545/1024 [MB] (13 MBps) [2024-11-21T03:30:15.606Z] Copying: 558/1024 [MB] (13 MBps) [2024-11-21T03:30:16.593Z] Copying: 571/1024 [MB] (13 MBps) [2024-11-21T03:30:17.553Z] Copying: 585/1024 [MB] (13 MBps) [2024-11-21T03:30:18.498Z] Copying: 604/1024 [MB] (18 MBps) [2024-11-21T03:30:19.890Z] Copying: 626/1024 [MB] (22 MBps) [2024-11-21T03:30:20.836Z] Copying: 640/1024 [MB] (14 MBps) [2024-11-21T03:30:21.780Z] Copying: 654/1024 [MB] (14 MBps) [2024-11-21T03:30:22.726Z] Copying: 667/1024 [MB] (12 MBps) [2024-11-21T03:30:23.669Z] Copying: 680/1024 [MB] (13 MBps) [2024-11-21T03:30:24.611Z] Copying: 695/1024 [MB] (15 MBps) [2024-11-21T03:30:25.553Z] Copying: 711/1024 [MB] (15 MBps) [2024-11-21T03:30:26.498Z] Copying: 724/1024 [MB] (12 MBps) [2024-11-21T03:30:27.886Z] Copying: 739/1024 [MB] (15 MBps) [2024-11-21T03:30:28.829Z] Copying: 751/1024 [MB] (11 MBps) [2024-11-21T03:30:29.787Z] Copying: 761/1024 [MB] (10 MBps) [2024-11-21T03:30:30.731Z] Copying: 775/1024 [MB] (13 MBps) [2024-11-21T03:30:31.674Z] Copying: 802/1024 [MB] (26 MBps) [2024-11-21T03:30:32.618Z] Copying: 822/1024 [MB] (20 MBps) [2024-11-21T03:30:33.560Z] Copying: 842/1024 [MB] (19 MBps) [2024-11-21T03:30:34.501Z] Copying: 869/1024 [MB] (27 MBps) [2024-11-21T03:30:35.888Z] Copying: 903/1024 [MB] (34 MBps) [2024-11-21T03:30:36.830Z] Copying: 917/1024 [MB] (13 MBps) [2024-11-21T03:30:37.772Z] Copying: 941/1024 [MB] (23 MBps) [2024-11-21T03:30:38.715Z] Copying: 974/1024 [MB] (33 MBps) [2024-11-21T03:30:39.658Z] Copying: 988/1024 [MB] (14 MBps) [2024-11-21T03:30:40.607Z] Copying: 1007/1024 [MB] (18 MBps) [2024-11-21T03:30:40.607Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-21 03:30:40.319452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.042 [2024-11-21 03:30:40.319522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinit core IO channel 00:21:53.042 [2024-11-21 03:30:40.319538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:53.042 [2024-11-21 03:30:40.319547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.042 [2024-11-21 03:30:40.319572] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:53.042 [2024-11-21 03:30:40.320357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.042 [2024-11-21 03:30:40.320385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:53.042 [2024-11-21 03:30:40.320397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:21:53.042 [2024-11-21 03:30:40.320406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.042 [2024-11-21 03:30:40.323308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.042 [2024-11-21 03:30:40.323349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:53.042 [2024-11-21 03:30:40.323360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.877 ms 00:21:53.042 [2024-11-21 03:30:40.323368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.042 [2024-11-21 03:30:40.342550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.042 [2024-11-21 03:30:40.342590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:53.042 [2024-11-21 03:30:40.342602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.159 ms 00:21:53.042 [2024-11-21 03:30:40.342611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.042 [2024-11-21 03:30:40.348819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.042 [2024-11-21 03:30:40.348866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:53.042 [2024-11-21 03:30:40.348885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.170 ms 00:21:53.042 [2024-11-21 03:30:40.348893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.042 [2024-11-21 03:30:40.351147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.042 [2024-11-21 03:30:40.351190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:53.042 [2024-11-21 03:30:40.351200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.190 ms 00:21:53.042 [2024-11-21 03:30:40.351207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.042 [2024-11-21 03:30:40.355745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.042 [2024-11-21 03:30:40.355791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:53.042 [2024-11-21 03:30:40.355801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.499 ms 00:21:53.042 [2024-11-21 03:30:40.355818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.042 [2024-11-21 03:30:40.355952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.042 [2024-11-21 03:30:40.355963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:53.042 [2024-11-21 03:30:40.355972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:21:53.042 [2024-11-21 03:30:40.355985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:53.042 [2024-11-21 03:30:40.358710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.042 [2024-11-21 03:30:40.358750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:53.042 [2024-11-21 03:30:40.358759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.709 ms 00:21:53.042 [2024-11-21 03:30:40.358767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.042 [2024-11-21 03:30:40.360603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.042 [2024-11-21 03:30:40.360639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:53.042 [2024-11-21 03:30:40.360649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.784 ms 00:21:53.042 [2024-11-21 03:30:40.360656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.042 [2024-11-21 03:30:40.362244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.042 [2024-11-21 03:30:40.362285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:53.042 [2024-11-21 03:30:40.362294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.551 ms 00:21:53.042 [2024-11-21 03:30:40.362301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.042 [2024-11-21 03:30:40.363833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.042 [2024-11-21 03:30:40.363873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:53.042 [2024-11-21 03:30:40.363883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.464 ms 00:21:53.042 [2024-11-21 03:30:40.363889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.042 [2024-11-21 03:30:40.363949] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:53.042 [2024-11-21 03:30:40.363976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.363987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.363996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364063] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:53.042 [2024-11-21 03:30:40.364157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 
[2024-11-21 03:30:40.364249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 
state: free 00:21:53.043 [2024-11-21 03:30:40.364441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 
0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:53.043 [2024-11-21 03:30:40.364746] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:53.043 [2024-11-21 03:30:40.364753] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c25190a4-7a93-467f-bf6a-a9d74520ea9c 00:21:53.043 [2024-11-21 03:30:40.364762] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:53.043 [2024-11-21 03:30:40.364769] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:53.043 [2024-11-21 03:30:40.364776] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:53.043 [2024-11-21 03:30:40.364785] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:53.043 [2024-11-21 03:30:40.364792] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:53.043 [2024-11-21 03:30:40.364800] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:53.043 [2024-11-21 03:30:40.364807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:53.043 [2024-11-21 03:30:40.364814] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:53.043 [2024-11-21 03:30:40.364820] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:53.043 [2024-11-21 03:30:40.364827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.043 [2024-11-21 03:30:40.364834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:53.043 [2024-11-21 
03:30:40.364849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 00:21:53.043 [2024-11-21 03:30:40.364856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.043 [2024-11-21 03:30:40.367106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.043 [2024-11-21 03:30:40.367140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:53.044 [2024-11-21 03:30:40.367152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.228 ms 00:21:53.044 [2024-11-21 03:30:40.367161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.044 [2024-11-21 03:30:40.367281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.044 [2024-11-21 03:30:40.367297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:53.044 [2024-11-21 03:30:40.367307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:21:53.044 [2024-11-21 03:30:40.367316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.044 [2024-11-21 03:30:40.374540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.044 [2024-11-21 03:30:40.374584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:53.044 [2024-11-21 03:30:40.374594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.044 [2024-11-21 03:30:40.374602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.044 [2024-11-21 03:30:40.374659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.044 [2024-11-21 03:30:40.374671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:53.044 [2024-11-21 03:30:40.374679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.044 [2024-11-21 03:30:40.374687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.044 [2024-11-21 03:30:40.374744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.044 [2024-11-21 03:30:40.374754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:53.044 [2024-11-21 03:30:40.374763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.044 [2024-11-21 03:30:40.374770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.044 [2024-11-21 03:30:40.374794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.044 [2024-11-21 03:30:40.374803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:53.044 [2024-11-21 03:30:40.374813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.044 [2024-11-21 03:30:40.374821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.044 [2024-11-21 03:30:40.388090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.044 [2024-11-21 03:30:40.388137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:53.044 [2024-11-21 03:30:40.388149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.044 [2024-11-21 03:30:40.388158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.044 [2024-11-21 03:30:40.399184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.044 [2024-11-21 03:30:40.399239] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:53.044 [2024-11-21 03:30:40.399250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.044 [2024-11-21 03:30:40.399258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.044 [2024-11-21 03:30:40.399309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.044 [2024-11-21 03:30:40.399320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:53.044 [2024-11-21 03:30:40.399329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.044 [2024-11-21 03:30:40.399338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.044 [2024-11-21 03:30:40.399374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.044 [2024-11-21 03:30:40.399384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:53.044 [2024-11-21 03:30:40.399392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.044 [2024-11-21 03:30:40.399404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.044 [2024-11-21 03:30:40.399478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.044 [2024-11-21 03:30:40.399488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:53.044 [2024-11-21 03:30:40.399497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.044 [2024-11-21 03:30:40.399505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.044 [2024-11-21 03:30:40.399535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.044 [2024-11-21 03:30:40.399544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:53.044 [2024-11-21 03:30:40.399552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.044 [2024-11-21 03:30:40.399560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.044 [2024-11-21 03:30:40.399604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.044 [2024-11-21 03:30:40.399623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:53.044 [2024-11-21 03:30:40.399632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.044 [2024-11-21 03:30:40.399640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.044 [2024-11-21 03:30:40.399691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.044 [2024-11-21 03:30:40.399701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:53.044 [2024-11-21 03:30:40.399709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.044 [2024-11-21 03:30:40.399720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.044 [2024-11-21 03:30:40.399851] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 80.370 ms, result 0 00:21:53.616 00:21:53.616 00:21:53.616 03:30:40 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:53.616 [2024-11-21 03:30:40.989926] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 
initialization... 00:21:53.616 [2024-11-21 03:30:40.990103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90699 ] 00:21:53.616 [2024-11-21 03:30:41.126178] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:53.616 [2024-11-21 03:30:41.157124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.877 [2024-11-21 03:30:41.185172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.877 [2024-11-21 03:30:41.299437] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:53.877 [2024-11-21 03:30:41.299515] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:54.141 [2024-11-21 03:30:41.459994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.141 [2024-11-21 03:30:41.460050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:54.141 [2024-11-21 03:30:41.460066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:54.141 [2024-11-21 03:30:41.460075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.141 [2024-11-21 03:30:41.460131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.141 [2024-11-21 03:30:41.460146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:54.141 [2024-11-21 03:30:41.460155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:54.141 [2024-11-21 03:30:41.460163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.141 [2024-11-21 03:30:41.460185] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:54.141 [2024-11-21 03:30:41.460574] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:54.141 [2024-11-21 03:30:41.460613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.141 [2024-11-21 03:30:41.460621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:54.141 [2024-11-21 03:30:41.460635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:21:54.141 [2024-11-21 03:30:41.460643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.141 [2024-11-21 03:30:41.462340] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:54.141 [2024-11-21 03:30:41.465979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.142 [2024-11-21 03:30:41.466032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:54.142 [2024-11-21 03:30:41.466043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.641 ms 00:21:54.142 [2024-11-21 03:30:41.466060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.142 [2024-11-21 03:30:41.466142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.142 [2024-11-21 03:30:41.466152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:54.142 [2024-11-21 03:30:41.466162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 
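
The restore pass above invokes spdk_dd with --count=262144; assuming the FTL's 4 KiB block size, that is 262144 x 4 KiB = 1024 MiB, which lines up with the 1024 MB total reported by the copy progress further below. Each management step in this trace is logged as a fixed group of records (Action/Rollback, then name, duration, status), so startup and shutdown timing can be profiled straight from a captured console log. A minimal sketch in Python, assuming the output is saved to a file and the exact trace_step wording shown in these lines (the file name and regex are illustrative, not part of the SPDK test suite):

import re
from collections import defaultdict

# Every FTL management step in this log is traced as paired records, e.g.
#   "mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block"
#   "mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.641 ms"
# The pattern matches either record; a trailing wall-clock stamp (e.g.
# "00:21:54.142") or the next "[2024-..." bracket terminates the captured value.
RECORD = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] "
    r"(?:name: (?P<name>.+?)|duration: (?P<ms>[\d.]+) ms)"
    r"(?=\s+\d{2}:\d{2}:\d{2}\.\d{3}|\s+\[|$)"
)

def step_durations(log_text):
    # Sum the reported duration per step name (steps repeat across
    # startup, shutdown, and rollback sequences).
    totals = defaultdict(float)
    pending = None
    for m in RECORD.finditer(log_text):
        if m.group("name") is not None:
            pending = m.group("name")
        elif pending is not None:
            totals[pending] += float(m.group("ms"))
            pending = None
    return totals

if __name__ == "__main__":
    with open("ftl_restore.log") as f:  # hypothetical capture of this console output
        text = f.read()
    for name, ms in sorted(step_durations(text).items(), key=lambda kv: -kv[1]):
        print(f"{ms:9.3f} ms  {name}")

Run against the startup trace in this run, the two slowest steps would be 'Restore P2L checkpoints' (23.596 ms) and 'Initialize NV cache' (21.042 ms). As a side check on the layout dump that follows: the l2p region's 80.00 MiB is exactly L2P entries x address size, 20971520 x 4 B.
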
00:21:54.142 [2024-11-21 03:30:41.466173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.142 [2024-11-21 03:30:41.474076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.142 [2024-11-21 03:30:41.474110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:54.142 [2024-11-21 03:30:41.474126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.859 ms 00:21:54.142 [2024-11-21 03:30:41.474134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.142 [2024-11-21 03:30:41.474233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.142 [2024-11-21 03:30:41.474249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:54.142 [2024-11-21 03:30:41.474261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:21:54.142 [2024-11-21 03:30:41.474270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.142 [2024-11-21 03:30:41.474320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.142 [2024-11-21 03:30:41.474331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:54.142 [2024-11-21 03:30:41.474339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:54.142 [2024-11-21 03:30:41.474350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.142 [2024-11-21 03:30:41.474378] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:54.142 [2024-11-21 03:30:41.476429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.142 [2024-11-21 03:30:41.476457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:54.142 [2024-11-21 03:30:41.476476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.057 ms 00:21:54.142 [2024-11-21 03:30:41.476485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.142 [2024-11-21 03:30:41.476519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.142 [2024-11-21 03:30:41.476528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:54.142 [2024-11-21 03:30:41.476537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:54.142 [2024-11-21 03:30:41.476551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.142 [2024-11-21 03:30:41.476573] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:54.142 [2024-11-21 03:30:41.476593] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:54.142 [2024-11-21 03:30:41.476630] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:54.142 [2024-11-21 03:30:41.476649] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:54.142 [2024-11-21 03:30:41.476754] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:54.142 [2024-11-21 03:30:41.476765] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:54.142 [2024-11-21 03:30:41.476779] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 
0x190 bytes 00:21:54.142 [2024-11-21 03:30:41.476794] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:54.142 [2024-11-21 03:30:41.476803] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:54.142 [2024-11-21 03:30:41.476811] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:54.142 [2024-11-21 03:30:41.476819] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:54.142 [2024-11-21 03:30:41.476831] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:54.142 [2024-11-21 03:30:41.476839] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:54.142 [2024-11-21 03:30:41.476847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.142 [2024-11-21 03:30:41.476855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:54.142 [2024-11-21 03:30:41.476866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:21:54.142 [2024-11-21 03:30:41.476873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.142 [2024-11-21 03:30:41.476976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.142 [2024-11-21 03:30:41.476986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:54.142 [2024-11-21 03:30:41.476994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:21:54.142 [2024-11-21 03:30:41.477002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.142 [2024-11-21 03:30:41.477099] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:54.142 [2024-11-21 03:30:41.477114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:54.142 [2024-11-21 03:30:41.477123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:54.142 [2024-11-21 03:30:41.477135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.142 [2024-11-21 03:30:41.477143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:54.142 [2024-11-21 03:30:41.477150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:54.142 [2024-11-21 03:30:41.477156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:54.142 [2024-11-21 03:30:41.477164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:54.142 [2024-11-21 03:30:41.477178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:54.142 [2024-11-21 03:30:41.477187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:54.142 [2024-11-21 03:30:41.477194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:54.142 [2024-11-21 03:30:41.477200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:54.142 [2024-11-21 03:30:41.477206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:54.142 [2024-11-21 03:30:41.477213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:54.142 [2024-11-21 03:30:41.477219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:54.142 [2024-11-21 03:30:41.477227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.142 [2024-11-21 03:30:41.477235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region nvc_md_mirror 00:21:54.142 [2024-11-21 03:30:41.477242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:54.142 [2024-11-21 03:30:41.477248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.142 [2024-11-21 03:30:41.477255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:54.142 [2024-11-21 03:30:41.477262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:54.142 [2024-11-21 03:30:41.477268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:54.142 [2024-11-21 03:30:41.477274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:54.142 [2024-11-21 03:30:41.477281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:54.142 [2024-11-21 03:30:41.477287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:54.142 [2024-11-21 03:30:41.477300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:54.142 [2024-11-21 03:30:41.477307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:54.142 [2024-11-21 03:30:41.477314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:54.142 [2024-11-21 03:30:41.477320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:54.142 [2024-11-21 03:30:41.477326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:54.142 [2024-11-21 03:30:41.477333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:54.142 [2024-11-21 03:30:41.477340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:54.142 [2024-11-21 03:30:41.477346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:54.142 [2024-11-21 03:30:41.477353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:54.142 [2024-11-21 03:30:41.477359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:54.142 [2024-11-21 03:30:41.477365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:54.142 [2024-11-21 03:30:41.477371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:54.142 [2024-11-21 03:30:41.477377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:54.142 [2024-11-21 03:30:41.477384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:54.142 [2024-11-21 03:30:41.477390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.142 [2024-11-21 03:30:41.477397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:54.142 [2024-11-21 03:30:41.477406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:54.142 [2024-11-21 03:30:41.477413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.142 [2024-11-21 03:30:41.477420] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:54.142 [2024-11-21 03:30:41.477431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:54.142 [2024-11-21 03:30:41.477442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:54.142 [2024-11-21 03:30:41.477449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.142 [2024-11-21 03:30:41.477457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:54.142 [2024-11-21 03:30:41.477467] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:54.142 [2024-11-21 03:30:41.477474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:54.142 [2024-11-21 03:30:41.477481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:54.142 [2024-11-21 03:30:41.477487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:54.142 [2024-11-21 03:30:41.477494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:54.142 [2024-11-21 03:30:41.477503] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:54.143 [2024-11-21 03:30:41.477512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:54.143 [2024-11-21 03:30:41.477524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:54.143 [2024-11-21 03:30:41.477532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:54.143 [2024-11-21 03:30:41.477541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:54.143 [2024-11-21 03:30:41.477548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:54.143 [2024-11-21 03:30:41.477556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:54.143 [2024-11-21 03:30:41.477562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:54.143 [2024-11-21 03:30:41.477569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:54.143 [2024-11-21 03:30:41.477576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:54.143 [2024-11-21 03:30:41.477583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:54.143 [2024-11-21 03:30:41.477590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:54.143 [2024-11-21 03:30:41.477597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:54.143 [2024-11-21 03:30:41.477604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:54.143 [2024-11-21 03:30:41.477610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:54.143 [2024-11-21 03:30:41.477618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:54.143 [2024-11-21 03:30:41.477624] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:54.143 [2024-11-21 03:30:41.477632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:54.143 [2024-11-21 03:30:41.477641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:54.143 [2024-11-21 03:30:41.477648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:54.143 [2024-11-21 03:30:41.477657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:54.143 [2024-11-21 03:30:41.477664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:54.143 [2024-11-21 03:30:41.477672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.477680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:54.143 [2024-11-21 03:30:41.477687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:21:54.143 [2024-11-21 03:30:41.477695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.491415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.491455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:54.143 [2024-11-21 03:30:41.491466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.675 ms 00:21:54.143 [2024-11-21 03:30:41.491476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.491563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.491573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:54.143 [2024-11-21 03:30:41.491583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:54.143 [2024-11-21 03:30:41.491593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.512693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.512756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:54.143 [2024-11-21 03:30:41.512775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.042 ms 00:21:54.143 [2024-11-21 03:30:41.512788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.512851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.512867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:54.143 [2024-11-21 03:30:41.512887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:54.143 [2024-11-21 03:30:41.512922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.513551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.513595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:54.143 [2024-11-21 03:30:41.513610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:21:54.143 [2024-11-21 03:30:41.513633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.513844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.513859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:54.143 [2024-11-21 03:30:41.513878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:21:54.143 [2024-11-21 03:30:41.513890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.522559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.522598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:54.143 [2024-11-21 03:30:41.522609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.617 ms 00:21:54.143 [2024-11-21 03:30:41.522616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.526500] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:54.143 [2024-11-21 03:30:41.526544] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:54.143 [2024-11-21 03:30:41.526560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.526569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:54.143 [2024-11-21 03:30:41.526578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.859 ms 00:21:54.143 [2024-11-21 03:30:41.526585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.542636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.542695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:54.143 [2024-11-21 03:30:41.542707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.999 ms 00:21:54.143 [2024-11-21 03:30:41.542716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.545650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.545688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:54.143 [2024-11-21 03:30:41.545698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.887 ms 00:21:54.143 [2024-11-21 03:30:41.545706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.548058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.548095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:54.143 [2024-11-21 03:30:41.548105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.310 ms 00:21:54.143 [2024-11-21 03:30:41.548122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.548461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.548482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:54.143 [2024-11-21 03:30:41.548493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:21:54.143 [2024-11-21 03:30:41.548501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.572118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.572168] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:54.143 [2024-11-21 03:30:41.572180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.596 ms 00:21:54.143 [2024-11-21 03:30:41.572189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.580151] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:54.143 [2024-11-21 03:30:41.583173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.583208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:54.143 [2024-11-21 03:30:41.583232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.936 ms 00:21:54.143 [2024-11-21 03:30:41.583245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.583318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.583329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:54.143 [2024-11-21 03:30:41.583339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:54.143 [2024-11-21 03:30:41.583347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.583420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.583431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:54.143 [2024-11-21 03:30:41.583444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:54.143 [2024-11-21 03:30:41.583452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.583472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.583481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:54.143 [2024-11-21 03:30:41.583494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:54.143 [2024-11-21 03:30:41.583504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.583538] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:54.143 [2024-11-21 03:30:41.583549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.583562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:54.143 [2024-11-21 03:30:41.583573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:54.143 [2024-11-21 03:30:41.583582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.143 [2024-11-21 03:30:41.588994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.143 [2024-11-21 03:30:41.589035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:54.143 [2024-11-21 03:30:41.589046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.395 ms 00:21:54.144 [2024-11-21 03:30:41.589055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.144 [2024-11-21 03:30:41.589141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.144 [2024-11-21 03:30:41.589159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:54.144 [2024-11-21 03:30:41.589169] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:54.144 [2024-11-21 03:30:41.589180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.144 [2024-11-21 03:30:41.590449] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.002 ms, result 0 00:21:55.531  [... 68 per-second spdk_dd progress ticks elided (Copying: 18/1024 through 1018/1024 [MB], 10-25 MBps), 2024-11-21T03:30:44Z through 2024-11-21T03:31:50Z ...] [2024-11-21T03:31:50.584Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-21 03:31:50.505673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.019 [2024-11-21 03:31:50.505757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:03.019 [2024-11-21 03:31:50.505775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:03.019 [2024-11-21 03:31:50.505791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.019 [2024-11-21 03:31:50.505823] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:03.019 [2024-11-21 03:31:50.506664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.019 [2024-11-21 03:31:50.506710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:03.019 [2024-11-21 03:31:50.506724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.824 ms 00:23:03.019 [2024-11-21 03:31:50.506734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.019 [2024-11-21 03:31:50.507002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.019 [2024-11-21 03:31:50.507020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:03.019 [2024-11-21 03:31:50.507031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:23:03.019 [2024-11-21 03:31:50.507048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.019 [2024-11-21 03:31:50.510523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.019 [2024-11-21 03:31:50.510544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:03.019 [2024-11-21 03:31:50.510554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.456 ms 00:23:03.019 [2024-11-21 03:31:50.510563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.019 [2024-11-21 03:31:50.517781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.019 [2024-11-21 03:31:50.517819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:03.019 [2024-11-21 03:31:50.517831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.198 ms 00:23:03.019 [2024-11-21 03:31:50.517839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.019 [2024-11-21 03:31:50.520953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.019 [2024-11-21 03:31:50.521002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name:
Persist NV cache metadata 00:23:03.019 [2024-11-21 03:31:50.521013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.024 ms 00:23:03.019 [2024-11-21 03:31:50.521021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.019 [2024-11-21 03:31:50.526468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.019 [2024-11-21 03:31:50.526522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:03.019 [2024-11-21 03:31:50.526534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.397 ms 00:23:03.019 [2024-11-21 03:31:50.526543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.019 [2024-11-21 03:31:50.526663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.019 [2024-11-21 03:31:50.526674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:03.019 [2024-11-21 03:31:50.526683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:23:03.019 [2024-11-21 03:31:50.526691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.019 [2024-11-21 03:31:50.530547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.019 [2024-11-21 03:31:50.530609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:03.019 [2024-11-21 03:31:50.530622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.830 ms 00:23:03.019 [2024-11-21 03:31:50.530647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.019 [2024-11-21 03:31:50.534291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.019 [2024-11-21 03:31:50.534344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:03.019 [2024-11-21 03:31:50.534355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.590 ms 00:23:03.019 [2024-11-21 03:31:50.534362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.019 [2024-11-21 03:31:50.536926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.019 [2024-11-21 03:31:50.536963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:03.019 [2024-11-21 03:31:50.536974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.517 ms 00:23:03.019 [2024-11-21 03:31:50.536982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.019 [2024-11-21 03:31:50.540170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.019 [2024-11-21 03:31:50.540231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:03.019 [2024-11-21 03:31:50.540244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.460 ms 00:23:03.019 [2024-11-21 03:31:50.540252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.019 [2024-11-21 03:31:50.540298] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:03.020 [2024-11-21 03:31:50.540315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540343] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 
03:31:50.540542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:23:03.020 [2024-11-21 03:31:50.540739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.540999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.541007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.541014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.541022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:03.020 [2024-11-21 03:31:50.541030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:03.021 [2024-11-21 03:31:50.541038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:03.021 [2024-11-21 03:31:50.541046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:03.021 [2024-11-21 03:31:50.541053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:03.021 [2024-11-21 03:31:50.541061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:03.021 [2024-11-21 03:31:50.541069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:03.021 [2024-11-21 03:31:50.541076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:03.021 [2024-11-21 03:31:50.541085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:03.021 [2024-11-21 03:31:50.541092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:03.021 [2024-11-21 03:31:50.541100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:03.021 [2024-11-21 03:31:50.541108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:03.021 [2024-11-21 03:31:50.541116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:03.021 [2024-11-21 03:31:50.541124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:03.021 [2024-11-21 03:31:50.541140] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:03.021 [2024-11-21 03:31:50.541148] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c25190a4-7a93-467f-bf6a-a9d74520ea9c 00:23:03.021 [2024-11-21 03:31:50.541165] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:03.021 
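
In these band dumps, each 'Band N: 0 / 261120 wr_cnt: 0 state: free' record apparently reads as valid blocks out of the band's 261120-block capacity, followed by the band's write count and state; with all 100 bands free, the 'total valid LBAs: 0' line is consistent. Both shutdown dumps also report total writes: 960 against user writes: 0, and the WAF line follows from those two counters: write amplification is media writes per user write, so zero user writes yields 'inf' (only metadata traffic so far). A tiny sketch of that arithmetic, with the division-by-zero case made explicit (the formula is assumed from the counters shown here, and the function name is illustrative):

import math

def write_amplification(total_writes, user_writes):
    # WAF = media writes / user writes; no user writes yet -> infinite,
    # matching the "WAF: inf" lines in the dumps above.
    return math.inf if user_writes == 0 else total_writes / user_writes

print(write_amplification(960, 0))      # inf  (values from the dump above)
print(write_amplification(1200, 1000))  # 1.2  (hypothetical: 20% overhead writes)
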
[2024-11-21 03:31:50.541174] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:03.021 [2024-11-21 03:31:50.541182] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:03.021 [2024-11-21 03:31:50.541191] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:03.021 [2024-11-21 03:31:50.541199] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:03.021 [2024-11-21 03:31:50.541208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:03.021 [2024-11-21 03:31:50.541216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:03.021 [2024-11-21 03:31:50.541223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:03.021 [2024-11-21 03:31:50.541230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:03.021 [2024-11-21 03:31:50.541238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.021 [2024-11-21 03:31:50.541260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:03.021 [2024-11-21 03:31:50.541270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms 00:23:03.021 [2024-11-21 03:31:50.541284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.021 [2024-11-21 03:31:50.543779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.021 [2024-11-21 03:31:50.543824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:03.021 [2024-11-21 03:31:50.543835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.475 ms 00:23:03.021 [2024-11-21 03:31:50.543843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.021 [2024-11-21 03:31:50.543985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.021 [2024-11-21 03:31:50.543995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:03.021 [2024-11-21 03:31:50.544004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:23:03.021 [2024-11-21 03:31:50.544012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.021 [2024-11-21 03:31:50.552164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.021 [2024-11-21 03:31:50.552210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:03.021 [2024-11-21 03:31:50.552230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.021 [2024-11-21 03:31:50.552239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.021 [2024-11-21 03:31:50.552309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.021 [2024-11-21 03:31:50.552319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:03.021 [2024-11-21 03:31:50.552327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.021 [2024-11-21 03:31:50.552337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.021 [2024-11-21 03:31:50.552405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.021 [2024-11-21 03:31:50.552416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:03.021 [2024-11-21 03:31:50.552424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.021 [2024-11-21 03:31:50.552432] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.021 [2024-11-21 03:31:50.552449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.021 [2024-11-21 03:31:50.552464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:03.021 [2024-11-21 03:31:50.552472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.021 [2024-11-21 03:31:50.552480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.021 [2024-11-21 03:31:50.566701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.021 [2024-11-21 03:31:50.566747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:03.021 [2024-11-21 03:31:50.566759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.021 [2024-11-21 03:31:50.566776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.021 [2024-11-21 03:31:50.577461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.021 [2024-11-21 03:31:50.577508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:03.021 [2024-11-21 03:31:50.577526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.021 [2024-11-21 03:31:50.577535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.021 [2024-11-21 03:31:50.577584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.021 [2024-11-21 03:31:50.577593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:03.021 [2024-11-21 03:31:50.577602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.021 [2024-11-21 03:31:50.577611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.021 [2024-11-21 03:31:50.577646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.021 [2024-11-21 03:31:50.577656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:03.021 [2024-11-21 03:31:50.577670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.021 [2024-11-21 03:31:50.577682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.021 [2024-11-21 03:31:50.577749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.021 [2024-11-21 03:31:50.577762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:03.021 [2024-11-21 03:31:50.577770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.021 [2024-11-21 03:31:50.577778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.021 [2024-11-21 03:31:50.577810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.021 [2024-11-21 03:31:50.577820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:03.021 [2024-11-21 03:31:50.577828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.021 [2024-11-21 03:31:50.577838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.021 [2024-11-21 03:31:50.577885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.021 [2024-11-21 03:31:50.577940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:03.021 [2024-11-21 03:31:50.577950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:23:03.021 [2024-11-21 03:31:50.577959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.021 [2024-11-21 03:31:50.578018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.021 [2024-11-21 03:31:50.578030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:03.021 [2024-11-21 03:31:50.578042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.021 [2024-11-21 03:31:50.578051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.021 [2024-11-21 03:31:50.578177] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 72.488 ms, result 0 00:23:03.282 00:23:03.282 00:23:03.282 03:31:50 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:05.829 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:05.829 03:31:53 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:05.829 [2024-11-21 03:31:53.130769] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:23:05.829 [2024-11-21 03:31:53.130946] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91444 ] 00:23:05.829 [2024-11-21 03:31:53.267058] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
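[Annotation, not part of the captured log: at this point the restore test has torn the FTL device down, verified with `md5sum -c` that the file read back before shutdown matches the digest recorded earlier, and started the next phase, where `spdk_dd` rewrites the test file to `ftl0` while skipping the first 131072 output blocks (`--seek`, dd-style semantics). The digest check is equivalent in spirit to the minimal sketch below, which assumes a standard `<digest>  <path>` checksum file; it is a stand-in for illustration, not the test's own code.]

```python
# Minimal stand-in for the `md5sum -c` step: recompute each listed file's
# MD5 and compare it against the digest recorded in the checksum file.
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def check(md5_file: str) -> bool:
    # md5sum checksum files hold one "<digest>  <path>" entry per line.
    all_ok = True
    with open(md5_file) as f:
        for line in f:
            digest, path = line.split(None, 1)
            ok = md5_of(path.strip()) == digest
            print(f"{path.strip()}: {'OK' if ok else 'FAILED'}")
            all_ok = all_ok and ok
    return all_ok
```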
00:23:05.829 [2024-11-21 03:31:53.297084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.829 [2024-11-21 03:31:53.325410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.092 [2024-11-21 03:31:53.435384] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:06.092 [2024-11-21 03:31:53.435471] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:06.092 [2024-11-21 03:31:53.597567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.092 [2024-11-21 03:31:53.597635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:06.092 [2024-11-21 03:31:53.597654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:06.092 [2024-11-21 03:31:53.597662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.092 [2024-11-21 03:31:53.597727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.092 [2024-11-21 03:31:53.597738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:06.092 [2024-11-21 03:31:53.597748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:06.092 [2024-11-21 03:31:53.597756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.092 [2024-11-21 03:31:53.597782] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:06.092 [2024-11-21 03:31:53.598439] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:06.092 [2024-11-21 03:31:53.598496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.092 [2024-11-21 03:31:53.598507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:06.092 [2024-11-21 03:31:53.598521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:23:06.092 [2024-11-21 03:31:53.598530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.092 [2024-11-21 03:31:53.600325] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:06.092 [2024-11-21 03:31:53.604244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.092 [2024-11-21 03:31:53.604306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:06.092 [2024-11-21 03:31:53.604318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.920 ms 00:23:06.092 [2024-11-21 03:31:53.604333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.092 [2024-11-21 03:31:53.604416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.092 [2024-11-21 03:31:53.604426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:06.092 [2024-11-21 03:31:53.604436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:06.092 [2024-11-21 03:31:53.604444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.092 [2024-11-21 03:31:53.612609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.092 [2024-11-21 03:31:53.612656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:06.092 [2024-11-21 03:31:53.612671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.122 ms 00:23:06.092 [2024-11-21 03:31:53.612678] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.092 [2024-11-21 03:31:53.612783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.092 [2024-11-21 03:31:53.612792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:06.092 [2024-11-21 03:31:53.612803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:23:06.092 [2024-11-21 03:31:53.612816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.092 [2024-11-21 03:31:53.612874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.092 [2024-11-21 03:31:53.612890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:06.092 [2024-11-21 03:31:53.612918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:06.092 [2024-11-21 03:31:53.612929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.092 [2024-11-21 03:31:53.612957] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:06.092 [2024-11-21 03:31:53.615067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.092 [2024-11-21 03:31:53.615110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:06.092 [2024-11-21 03:31:53.615129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.120 ms 00:23:06.092 [2024-11-21 03:31:53.615137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.092 [2024-11-21 03:31:53.615172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.092 [2024-11-21 03:31:53.615185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:06.092 [2024-11-21 03:31:53.615193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:06.092 [2024-11-21 03:31:53.615207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.092 [2024-11-21 03:31:53.615229] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:06.092 [2024-11-21 03:31:53.615250] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:06.092 [2024-11-21 03:31:53.615288] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:06.092 [2024-11-21 03:31:53.615304] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:06.092 [2024-11-21 03:31:53.615409] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:06.092 [2024-11-21 03:31:53.615420] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:06.092 [2024-11-21 03:31:53.615435] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:06.092 [2024-11-21 03:31:53.615450] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:06.092 [2024-11-21 03:31:53.615459] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:06.092 [2024-11-21 03:31:53.615467] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:06.092 [2024-11-21 03:31:53.615475] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:23:06.092 [2024-11-21 03:31:53.615483] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:06.092 [2024-11-21 03:31:53.615494] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:06.092 [2024-11-21 03:31:53.615502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.092 [2024-11-21 03:31:53.615510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:06.092 [2024-11-21 03:31:53.615521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:23:06.092 [2024-11-21 03:31:53.615529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.092 [2024-11-21 03:31:53.615613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.092 [2024-11-21 03:31:53.615623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:06.092 [2024-11-21 03:31:53.615630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:06.092 [2024-11-21 03:31:53.615637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.092 [2024-11-21 03:31:53.615734] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:06.092 [2024-11-21 03:31:53.615753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:06.092 [2024-11-21 03:31:53.615762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:06.092 [2024-11-21 03:31:53.615772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:06.092 [2024-11-21 03:31:53.615781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:06.092 [2024-11-21 03:31:53.615790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:06.092 [2024-11-21 03:31:53.615798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:06.093 [2024-11-21 03:31:53.615806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:06.093 [2024-11-21 03:31:53.615821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:06.093 [2024-11-21 03:31:53.615831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:06.093 [2024-11-21 03:31:53.615839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:06.093 [2024-11-21 03:31:53.615847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:06.093 [2024-11-21 03:31:53.615855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:06.093 [2024-11-21 03:31:53.615863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:06.093 [2024-11-21 03:31:53.615871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:06.093 [2024-11-21 03:31:53.615879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:06.093 [2024-11-21 03:31:53.615887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:06.093 [2024-11-21 03:31:53.615914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:06.093 [2024-11-21 03:31:53.615923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:06.093 [2024-11-21 03:31:53.615931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:06.093 [2024-11-21 03:31:53.615940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:06.093 [2024-11-21 03:31:53.615948] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:06.093 [2024-11-21 03:31:53.615956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:06.093 [2024-11-21 03:31:53.615964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:06.093 [2024-11-21 03:31:53.615972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:06.093 [2024-11-21 03:31:53.615984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:06.093 [2024-11-21 03:31:53.615993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:06.093 [2024-11-21 03:31:53.616000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:06.093 [2024-11-21 03:31:53.616007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:06.093 [2024-11-21 03:31:53.616015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:06.093 [2024-11-21 03:31:53.616023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:06.093 [2024-11-21 03:31:53.616031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:06.093 [2024-11-21 03:31:53.616038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:06.093 [2024-11-21 03:31:53.616047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:06.093 [2024-11-21 03:31:53.616055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:06.093 [2024-11-21 03:31:53.616062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:06.093 [2024-11-21 03:31:53.616070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:06.093 [2024-11-21 03:31:53.616078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:06.093 [2024-11-21 03:31:53.616086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:06.093 [2024-11-21 03:31:53.616094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:06.093 [2024-11-21 03:31:53.616101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:06.093 [2024-11-21 03:31:53.616113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:06.093 [2024-11-21 03:31:53.616122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:06.093 [2024-11-21 03:31:53.616130] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:06.093 [2024-11-21 03:31:53.616148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:06.093 [2024-11-21 03:31:53.616156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:06.093 [2024-11-21 03:31:53.616168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:06.093 [2024-11-21 03:31:53.616177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:06.093 [2024-11-21 03:31:53.616184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:06.093 [2024-11-21 03:31:53.616191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:06.093 [2024-11-21 03:31:53.616199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:06.093 [2024-11-21 03:31:53.616205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:06.093 [2024-11-21 03:31:53.616213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:23:06.093 [2024-11-21 03:31:53.616222] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:06.093 [2024-11-21 03:31:53.616231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:06.093 [2024-11-21 03:31:53.616239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:06.093 [2024-11-21 03:31:53.616247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:06.093 [2024-11-21 03:31:53.616256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:06.093 [2024-11-21 03:31:53.616264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:06.093 [2024-11-21 03:31:53.616271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:06.093 [2024-11-21 03:31:53.616278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:06.093 [2024-11-21 03:31:53.616285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:06.093 [2024-11-21 03:31:53.616293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:06.093 [2024-11-21 03:31:53.616301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:06.093 [2024-11-21 03:31:53.616308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:06.093 [2024-11-21 03:31:53.616315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:06.093 [2024-11-21 03:31:53.616323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:06.093 [2024-11-21 03:31:53.616330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:06.093 [2024-11-21 03:31:53.616338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:06.093 [2024-11-21 03:31:53.616345] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:06.093 [2024-11-21 03:31:53.616352] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:06.093 [2024-11-21 03:31:53.616360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:06.093 [2024-11-21 03:31:53.616367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:06.093 [2024-11-21 03:31:53.616376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:06.093 [2024-11-21 03:31:53.616384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:06.093 [2024-11-21 03:31:53.616392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.093 [2024-11-21 03:31:53.616399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:06.093 [2024-11-21 03:31:53.616407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:23:06.093 [2024-11-21 03:31:53.616415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.093 [2024-11-21 03:31:53.630640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.093 [2024-11-21 03:31:53.630693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:06.093 [2024-11-21 03:31:53.630714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.179 ms 00:23:06.093 [2024-11-21 03:31:53.630725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.093 [2024-11-21 03:31:53.630817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.093 [2024-11-21 03:31:53.630827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:06.093 [2024-11-21 03:31:53.630835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:06.094 [2024-11-21 03:31:53.630844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.094 [2024-11-21 03:31:53.650338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.094 [2024-11-21 03:31:53.650403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:06.094 [2024-11-21 03:31:53.650420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.436 ms 00:23:06.094 [2024-11-21 03:31:53.650431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.094 [2024-11-21 03:31:53.650489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.094 [2024-11-21 03:31:53.650505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:06.094 [2024-11-21 03:31:53.650524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:06.094 [2024-11-21 03:31:53.650537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.094 [2024-11-21 03:31:53.651128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.094 [2024-11-21 03:31:53.651174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:06.094 [2024-11-21 03:31:53.651189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:23:06.094 [2024-11-21 03:31:53.651200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.094 [2024-11-21 03:31:53.651395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.094 [2024-11-21 03:31:53.651408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:06.094 [2024-11-21 03:31:53.651420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:23:06.094 [2024-11-21 03:31:53.651431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.356 [2024-11-21 03:31:53.659511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.356 [2024-11-21 
03:31:53.659571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:06.356 [2024-11-21 03:31:53.659586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.053 ms 00:23:06.356 [2024-11-21 03:31:53.659596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.356 [2024-11-21 03:31:53.663475] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:06.356 [2024-11-21 03:31:53.663526] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:06.356 [2024-11-21 03:31:53.663543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.356 [2024-11-21 03:31:53.663551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:06.356 [2024-11-21 03:31:53.663560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.842 ms 00:23:06.356 [2024-11-21 03:31:53.663567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.356 [2024-11-21 03:31:53.679443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.356 [2024-11-21 03:31:53.679489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:06.356 [2024-11-21 03:31:53.679503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.823 ms 00:23:06.356 [2024-11-21 03:31:53.679511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.356 [2024-11-21 03:31:53.682592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.356 [2024-11-21 03:31:53.682642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:06.356 [2024-11-21 03:31:53.682653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.028 ms 00:23:06.356 [2024-11-21 03:31:53.682660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.356 [2024-11-21 03:31:53.685374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.356 [2024-11-21 03:31:53.685556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:06.356 [2024-11-21 03:31:53.685576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.666 ms 00:23:06.356 [2024-11-21 03:31:53.685595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.356 [2024-11-21 03:31:53.685959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.356 [2024-11-21 03:31:53.685978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:06.356 [2024-11-21 03:31:53.685989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:23:06.356 [2024-11-21 03:31:53.686026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.356 [2024-11-21 03:31:53.711301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.356 [2024-11-21 03:31:53.711362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:06.356 [2024-11-21 03:31:53.711374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.250 ms 00:23:06.356 [2024-11-21 03:31:53.711383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.356 [2024-11-21 03:31:53.719386] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:06.356 [2024-11-21 03:31:53.722562] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.356 [2024-11-21 03:31:53.722605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:06.356 [2024-11-21 03:31:53.722624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.131 ms 00:23:06.356 [2024-11-21 03:31:53.722633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.356 [2024-11-21 03:31:53.722711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.356 [2024-11-21 03:31:53.722722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:06.356 [2024-11-21 03:31:53.722731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:06.356 [2024-11-21 03:31:53.722739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.356 [2024-11-21 03:31:53.722804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.357 [2024-11-21 03:31:53.722814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:06.357 [2024-11-21 03:31:53.722826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:06.357 [2024-11-21 03:31:53.722835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.357 [2024-11-21 03:31:53.722854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.357 [2024-11-21 03:31:53.722862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:06.357 [2024-11-21 03:31:53.722871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:06.357 [2024-11-21 03:31:53.722882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.357 [2024-11-21 03:31:53.723015] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:06.357 [2024-11-21 03:31:53.723027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.357 [2024-11-21 03:31:53.723035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:06.357 [2024-11-21 03:31:53.723046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:06.357 [2024-11-21 03:31:53.723054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.357 [2024-11-21 03:31:53.728141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.357 [2024-11-21 03:31:53.728189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:06.357 [2024-11-21 03:31:53.728200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.067 ms 00:23:06.357 [2024-11-21 03:31:53.728208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.357 [2024-11-21 03:31:53.728292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.357 [2024-11-21 03:31:53.728305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:06.357 [2024-11-21 03:31:53.728315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:23:06.357 [2024-11-21 03:31:53.728326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.357 [2024-11-21 03:31:53.729373] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 131.366 ms, result 0 00:23:07.298  [2024-11-21T03:31:55.804Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-21T03:31:56.747Z] Copying: 29/1024 [MB] (15 MBps) 
[2024-11-21T03:31:58.133Z] Copying: 48/1024 [MB] (19 MBps) [2024-11-21T03:31:59.078Z] Copying: 69/1024 [MB] (21 MBps) [2024-11-21T03:32:00.020Z] Copying: 93/1024 [MB] (23 MBps) [2024-11-21T03:32:00.964Z] Copying: 116/1024 [MB] (23 MBps) [2024-11-21T03:32:01.909Z] Copying: 152/1024 [MB] (35 MBps) [2024-11-21T03:32:02.853Z] Copying: 171/1024 [MB] (19 MBps) [2024-11-21T03:32:03.799Z] Copying: 205/1024 [MB] (34 MBps) [2024-11-21T03:32:04.745Z] Copying: 221/1024 [MB] (15 MBps) [2024-11-21T03:32:06.165Z] Copying: 239/1024 [MB] (18 MBps) [2024-11-21T03:32:07.111Z] Copying: 256/1024 [MB] (17 MBps) [2024-11-21T03:32:08.054Z] Copying: 272/1024 [MB] (15 MBps) [2024-11-21T03:32:09.000Z] Copying: 286/1024 [MB] (14 MBps) [2024-11-21T03:32:09.944Z] Copying: 301/1024 [MB] (14 MBps) [2024-11-21T03:32:10.887Z] Copying: 320/1024 [MB] (18 MBps) [2024-11-21T03:32:11.831Z] Copying: 332/1024 [MB] (11 MBps) [2024-11-21T03:32:12.796Z] Copying: 344/1024 [MB] (12 MBps) [2024-11-21T03:32:14.182Z] Copying: 358/1024 [MB] (13 MBps) [2024-11-21T03:32:14.755Z] Copying: 368/1024 [MB] (10 MBps) [2024-11-21T03:32:16.142Z] Copying: 381/1024 [MB] (13 MBps) [2024-11-21T03:32:17.085Z] Copying: 417/1024 [MB] (36 MBps) [2024-11-21T03:32:18.034Z] Copying: 452/1024 [MB] (34 MBps) [2024-11-21T03:32:18.978Z] Copying: 463/1024 [MB] (10 MBps) [2024-11-21T03:32:19.919Z] Copying: 478/1024 [MB] (15 MBps) [2024-11-21T03:32:20.862Z] Copying: 508/1024 [MB] (30 MBps) [2024-11-21T03:32:21.808Z] Copying: 525/1024 [MB] (16 MBps) [2024-11-21T03:32:22.754Z] Copying: 558/1024 [MB] (33 MBps) [2024-11-21T03:32:24.141Z] Copying: 575/1024 [MB] (16 MBps) [2024-11-21T03:32:25.087Z] Copying: 590/1024 [MB] (15 MBps) [2024-11-21T03:32:26.030Z] Copying: 609/1024 [MB] (19 MBps) [2024-11-21T03:32:26.975Z] Copying: 645/1024 [MB] (35 MBps) [2024-11-21T03:32:27.919Z] Copying: 673/1024 [MB] (28 MBps) [2024-11-21T03:32:28.865Z] Copying: 691/1024 [MB] (18 MBps) [2024-11-21T03:32:29.808Z] Copying: 705/1024 [MB] (14 MBps) [2024-11-21T03:32:30.752Z] Copying: 727/1024 [MB] (21 MBps) [2024-11-21T03:32:32.138Z] Copying: 738/1024 [MB] (11 MBps) [2024-11-21T03:32:33.082Z] Copying: 751/1024 [MB] (12 MBps) [2024-11-21T03:32:34.027Z] Copying: 769/1024 [MB] (18 MBps) [2024-11-21T03:32:34.969Z] Copying: 779/1024 [MB] (10 MBps) [2024-11-21T03:32:35.914Z] Copying: 790/1024 [MB] (11 MBps) [2024-11-21T03:32:36.858Z] Copying: 824/1024 [MB] (33 MBps) [2024-11-21T03:32:37.803Z] Copying: 860/1024 [MB] (35 MBps) [2024-11-21T03:32:38.747Z] Copying: 883/1024 [MB] (23 MBps) [2024-11-21T03:32:40.143Z] Copying: 894/1024 [MB] (10 MBps) [2024-11-21T03:32:40.780Z] Copying: 912/1024 [MB] (17 MBps) [2024-11-21T03:32:42.167Z] Copying: 933/1024 [MB] (20 MBps) [2024-11-21T03:32:43.112Z] Copying: 951/1024 [MB] (18 MBps) [2024-11-21T03:32:44.056Z] Copying: 973/1024 [MB] (21 MBps) [2024-11-21T03:32:45.002Z] Copying: 1008/1024 [MB] (35 MBps) [2024-11-21T03:32:45.002Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-21 03:32:44.644568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.437 [2024-11-21 03:32:44.644637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:57.437 [2024-11-21 03:32:44.644658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:57.437 [2024-11-21 03:32:44.644671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.437 [2024-11-21 03:32:44.644700] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:57.437 
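[Annotation, not part of the captured log: two figures in this stretch can be sanity-checked by hand. The progress trace above ends with `Copying: 1024/1024 [MB] (average 20 MBps)`, which squares with the first and last progress timestamps (03:31:55 to 03:32:45, roughly 49 s for 1024 MB). And the shutdown statistics dump that follows prints `WAF: 2.2500`, consistent with write amplification computed as total writes over user writes (1728 / 768). The sketch below just redoes that arithmetic; all values are read off the log.]

```python
# Arithmetic cross-check of two figures from the surrounding log entries
# (values copied from the trace; nothing here queries the device).
from datetime import datetime

start = datetime.fromisoformat("2024-11-21T03:31:55.804")
end = datetime.fromisoformat("2024-11-21T03:32:45.002")
elapsed = (end - start).total_seconds()                # ~49.2 s
print(f"avg throughput: {1024 / elapsed:.1f} MBps")    # ~20.8; log rounds to 20

total_writes, user_writes = 1728, 768                  # from the stats dump below
print(f"WAF: {total_writes / user_writes:.4f}")        # 2.2500, matching the log
```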
[2024-11-21 03:32:44.645535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.437 [2024-11-21 03:32:44.645575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:57.437 [2024-11-21 03:32:44.645586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:23:57.437 [2024-11-21 03:32:44.645595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.437 [2024-11-21 03:32:44.648675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.437 [2024-11-21 03:32:44.648794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:57.437 [2024-11-21 03:32:44.648830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.042 ms 00:23:57.437 [2024-11-21 03:32:44.648867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.437 [2024-11-21 03:32:44.671323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.437 [2024-11-21 03:32:44.671519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:57.437 [2024-11-21 03:32:44.671541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.394 ms 00:23:57.437 [2024-11-21 03:32:44.671550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.437 [2024-11-21 03:32:44.677817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.437 [2024-11-21 03:32:44.677864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:57.437 [2024-11-21 03:32:44.677876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.188 ms 00:23:57.437 [2024-11-21 03:32:44.677884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.437 [2024-11-21 03:32:44.680616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.437 [2024-11-21 03:32:44.680668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:57.437 [2024-11-21 03:32:44.680679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.644 ms 00:23:57.437 [2024-11-21 03:32:44.680687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.437 [2024-11-21 03:32:44.686188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.437 [2024-11-21 03:32:44.686241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:57.437 [2024-11-21 03:32:44.686252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.457 ms 00:23:57.437 [2024-11-21 03:32:44.686261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.437 [2024-11-21 03:32:44.688128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.437 [2024-11-21 03:32:44.688293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:57.437 [2024-11-21 03:32:44.688313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.819 ms 00:23:57.437 [2024-11-21 03:32:44.688336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.437 [2024-11-21 03:32:44.691245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.437 [2024-11-21 03:32:44.691295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:57.437 [2024-11-21 03:32:44.691305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.886 ms 00:23:57.437 [2024-11-21 03:32:44.691325] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.437 [2024-11-21 03:32:44.694149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.437 [2024-11-21 03:32:44.694196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:57.437 [2024-11-21 03:32:44.694206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.779 ms 00:23:57.437 [2024-11-21 03:32:44.694213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.437 [2024-11-21 03:32:44.696391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.437 [2024-11-21 03:32:44.696442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:57.437 [2024-11-21 03:32:44.696452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.137 ms 00:23:57.437 [2024-11-21 03:32:44.696459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.437 [2024-11-21 03:32:44.698747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.437 [2024-11-21 03:32:44.698947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:57.437 [2024-11-21 03:32:44.698966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.218 ms 00:23:57.437 [2024-11-21 03:32:44.698973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.437 [2024-11-21 03:32:44.699072] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:57.437 [2024-11-21 03:32:44.699106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 768 / 261120 wr_cnt: 1 state: open 00:23:57.437 [2024-11-21 03:32:44.699117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699217] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:57.437 [2024-11-21 03:32:44.699358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 
[2024-11-21 03:32:44.699404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:23:57.438 [2024-11-21 03:32:44.699593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 
0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:57.438 [2024-11-21 03:32:44.699874] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:57.438 [2024-11-21 03:32:44.699883] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c25190a4-7a93-467f-bf6a-a9d74520ea9c 00:23:57.438 [2024-11-21 03:32:44.699891] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 768 00:23:57.438 [2024-11-21 03:32:44.699928] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1728 00:23:57.438 [2024-11-21 03:32:44.699936] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 768 00:23:57.438 [2024-11-21 03:32:44.699945] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 2.2500 00:23:57.438 [2024-11-21 03:32:44.699953] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:57.438 [2024-11-21 03:32:44.699961] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:57.438 [2024-11-21 03:32:44.699969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:57.438 [2024-11-21 03:32:44.699975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:57.438 [2024-11-21 03:32:44.699982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:57.438 [2024-11-21 03:32:44.699989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.438 [2024-11-21 03:32:44.700011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:57.438 [2024-11-21 03:32:44.700020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 00:23:57.438 [2024-11-21 03:32:44.700034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.438 [2024-11-21 03:32:44.702462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.438 [2024-11-21 03:32:44.702488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
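Editor's note on the statistics dump above: the write-amplification figure the driver prints can be reproduced from the counters in the same dump. A minimal sketch, using only the values printed above (total writes 1728, user writes 768); the awk invocation is illustrative and not part of the test run:

    awk 'BEGIN { total = 1728; user = 768; printf "WAF = %.4f\n", total / user }'
    # prints: WAF = 2.2500  -- matching the "[FTL][ftl0] WAF: 2.2500" line above
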
00:23:57.438 [2024-11-21 03:32:44.702500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.409 ms 00:23:57.439 [2024-11-21 03:32:44.702518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.439 [2024-11-21 03:32:44.702647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.439 [2024-11-21 03:32:44.702665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:57.439 [2024-11-21 03:32:44.702674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:23:57.439 [2024-11-21 03:32:44.702682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.439 [2024-11-21 03:32:44.710568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.439 [2024-11-21 03:32:44.710619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:57.439 [2024-11-21 03:32:44.710631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.439 [2024-11-21 03:32:44.710645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.439 [2024-11-21 03:32:44.710702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.439 [2024-11-21 03:32:44.710711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:57.439 [2024-11-21 03:32:44.710719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.439 [2024-11-21 03:32:44.710727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.439 [2024-11-21 03:32:44.710797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.439 [2024-11-21 03:32:44.710807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:57.439 [2024-11-21 03:32:44.710816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.439 [2024-11-21 03:32:44.710824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.439 [2024-11-21 03:32:44.710842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.439 [2024-11-21 03:32:44.710851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:57.439 [2024-11-21 03:32:44.710859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.439 [2024-11-21 03:32:44.710867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.439 [2024-11-21 03:32:44.724290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.439 [2024-11-21 03:32:44.724349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:57.439 [2024-11-21 03:32:44.724361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.439 [2024-11-21 03:32:44.724373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.439 [2024-11-21 03:32:44.734538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.439 [2024-11-21 03:32:44.734586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:57.439 [2024-11-21 03:32:44.734599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.439 [2024-11-21 03:32:44.734607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.439 [2024-11-21 03:32:44.734655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.439 [2024-11-21 03:32:44.734664] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:57.439 [2024-11-21 03:32:44.734673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.439 [2024-11-21 03:32:44.734682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.439 [2024-11-21 03:32:44.734721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.439 [2024-11-21 03:32:44.734737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:57.439 [2024-11-21 03:32:44.734746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.439 [2024-11-21 03:32:44.734754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.439 [2024-11-21 03:32:44.734820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.439 [2024-11-21 03:32:44.734831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:57.439 [2024-11-21 03:32:44.734839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.439 [2024-11-21 03:32:44.734853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.439 [2024-11-21 03:32:44.734883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.439 [2024-11-21 03:32:44.734919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:57.439 [2024-11-21 03:32:44.734929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.439 [2024-11-21 03:32:44.734937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.439 [2024-11-21 03:32:44.734981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.439 [2024-11-21 03:32:44.734994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:57.439 [2024-11-21 03:32:44.735003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.439 [2024-11-21 03:32:44.735010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.439 [2024-11-21 03:32:44.735062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.439 [2024-11-21 03:32:44.735075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:57.439 [2024-11-21 03:32:44.735084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.439 [2024-11-21 03:32:44.735095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.439 [2024-11-21 03:32:44.735228] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 90.632 ms, result 0 00:23:58.380 00:23:58.380 00:23:58.380 03:32:45 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:58.380 [2024-11-21 03:32:45.707141] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
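For readability, the restore command that produced the run below (the ftl/restore.sh@80 line above) is repeated here with one flag per line; every path and value is verbatim from the log, nothing added:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
        --skip=131072 \
        --count=262144
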
00:23:58.380 [2024-11-21 03:32:45.707277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91982 ] 00:23:58.380 [2024-11-21 03:32:45.842930] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:58.380 [2024-11-21 03:32:45.873115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.380 [2024-11-21 03:32:45.893574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.642 [2024-11-21 03:32:45.987467] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:58.642 [2024-11-21 03:32:45.987534] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:58.642 [2024-11-21 03:32:46.147318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.642 [2024-11-21 03:32:46.147549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:58.642 [2024-11-21 03:32:46.147576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:58.642 [2024-11-21 03:32:46.147594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.642 [2024-11-21 03:32:46.147665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.642 [2024-11-21 03:32:46.147676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:58.643 [2024-11-21 03:32:46.147685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:23:58.643 [2024-11-21 03:32:46.147698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.643 [2024-11-21 03:32:46.147723] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:58.643 [2024-11-21 03:32:46.148146] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:58.643 [2024-11-21 03:32:46.148181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.643 [2024-11-21 03:32:46.148190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:58.643 [2024-11-21 03:32:46.148205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:23:58.643 [2024-11-21 03:32:46.148215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.643 [2024-11-21 03:32:46.149866] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:58.643 [2024-11-21 03:32:46.153738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.643 [2024-11-21 03:32:46.153791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:58.643 [2024-11-21 03:32:46.153803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.874 ms 00:23:58.643 [2024-11-21 03:32:46.153821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.643 [2024-11-21 03:32:46.153915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.643 [2024-11-21 03:32:46.153926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:58.643 [2024-11-21 03:32:46.153935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:23:58.643 [2024-11-21 
03:32:46.153943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.643 [2024-11-21 03:32:46.162206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.643 [2024-11-21 03:32:46.162248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:58.643 [2024-11-21 03:32:46.162263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.216 ms 00:23:58.643 [2024-11-21 03:32:46.162272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.643 [2024-11-21 03:32:46.162372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.643 [2024-11-21 03:32:46.162382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:58.643 [2024-11-21 03:32:46.162395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:23:58.643 [2024-11-21 03:32:46.162403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.643 [2024-11-21 03:32:46.162470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.643 [2024-11-21 03:32:46.162480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:58.643 [2024-11-21 03:32:46.162490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:58.643 [2024-11-21 03:32:46.162506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.643 [2024-11-21 03:32:46.162528] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:58.643 [2024-11-21 03:32:46.164480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.643 [2024-11-21 03:32:46.164517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:58.643 [2024-11-21 03:32:46.164529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.957 ms 00:23:58.643 [2024-11-21 03:32:46.164538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.643 [2024-11-21 03:32:46.164573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.643 [2024-11-21 03:32:46.164582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:58.643 [2024-11-21 03:32:46.164591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:58.643 [2024-11-21 03:32:46.164609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.643 [2024-11-21 03:32:46.164630] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:58.643 [2024-11-21 03:32:46.164650] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:58.643 [2024-11-21 03:32:46.164686] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:58.643 [2024-11-21 03:32:46.164702] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:58.643 [2024-11-21 03:32:46.164807] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:58.643 [2024-11-21 03:32:46.164818] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:58.643 [2024-11-21 03:32:46.164831] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:58.643 
[2024-11-21 03:32:46.164842] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:58.643 [2024-11-21 03:32:46.164851] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:58.643 [2024-11-21 03:32:46.164859] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:58.643 [2024-11-21 03:32:46.164867] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:58.643 [2024-11-21 03:32:46.164874] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:58.643 [2024-11-21 03:32:46.164882] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:58.643 [2024-11-21 03:32:46.164889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.643 [2024-11-21 03:32:46.164914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:58.643 [2024-11-21 03:32:46.164925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:23:58.643 [2024-11-21 03:32:46.164932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.643 [2024-11-21 03:32:46.165017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.643 [2024-11-21 03:32:46.165026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:58.643 [2024-11-21 03:32:46.165034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:58.643 [2024-11-21 03:32:46.165042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.643 [2024-11-21 03:32:46.165139] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:58.643 [2024-11-21 03:32:46.165149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:58.643 [2024-11-21 03:32:46.165163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:58.643 [2024-11-21 03:32:46.165171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.643 [2024-11-21 03:32:46.165179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:58.643 [2024-11-21 03:32:46.165186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:58.643 [2024-11-21 03:32:46.165193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:58.643 [2024-11-21 03:32:46.165202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:58.643 [2024-11-21 03:32:46.165215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:58.643 [2024-11-21 03:32:46.165224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:58.643 [2024-11-21 03:32:46.165231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:58.643 [2024-11-21 03:32:46.165239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:58.643 [2024-11-21 03:32:46.165247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:58.643 [2024-11-21 03:32:46.165255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:58.643 [2024-11-21 03:32:46.165262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:58.643 [2024-11-21 03:32:46.165269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.643 [2024-11-21 03:32:46.165276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
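The NV cache layout that follows is emitted as dump_region triples (region name, offset, blocks). A minimal sketch for tabulating them, assuming the console output was captured one entry per line to a file named ftl.log (a hypothetical name, not produced by this job):

    # pairs each "Region <name>" line with the offset/blocks lines that follow it
    awk '/130:dump_region/ { name = $NF }
         /131:dump_region/ { off  = $(NF-1) }
         /133:dump_region/ { printf "%-16s offset %10s MiB  blocks %10s MiB\n", name, off, $(NF-1) }' ftl.log
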
00:23:58.643 [2024-11-21 03:32:46.165284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:58.643 [2024-11-21 03:32:46.165290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.643 [2024-11-21 03:32:46.165298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:58.643 [2024-11-21 03:32:46.165305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:58.643 [2024-11-21 03:32:46.165312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:58.643 [2024-11-21 03:32:46.165319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:58.643 [2024-11-21 03:32:46.165326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:58.643 [2024-11-21 03:32:46.165334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:58.643 [2024-11-21 03:32:46.165345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:58.643 [2024-11-21 03:32:46.165352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:58.643 [2024-11-21 03:32:46.165359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:58.643 [2024-11-21 03:32:46.165366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:58.643 [2024-11-21 03:32:46.165373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:58.643 [2024-11-21 03:32:46.165379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:58.643 [2024-11-21 03:32:46.165386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:58.643 [2024-11-21 03:32:46.165393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:58.643 [2024-11-21 03:32:46.165400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:58.643 [2024-11-21 03:32:46.165406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:58.643 [2024-11-21 03:32:46.165413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:58.643 [2024-11-21 03:32:46.165419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:58.643 [2024-11-21 03:32:46.165426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:58.643 [2024-11-21 03:32:46.165432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:58.643 [2024-11-21 03:32:46.165439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.643 [2024-11-21 03:32:46.165446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:58.643 [2024-11-21 03:32:46.165455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:58.643 [2024-11-21 03:32:46.165463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.643 [2024-11-21 03:32:46.165469] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:58.643 [2024-11-21 03:32:46.165481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:58.643 [2024-11-21 03:32:46.165489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:58.644 [2024-11-21 03:32:46.165496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.644 [2024-11-21 03:32:46.165504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:58.644 [2024-11-21 03:32:46.165512] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:58.644 [2024-11-21 03:32:46.165518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:58.644 [2024-11-21 03:32:46.165525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:58.644 [2024-11-21 03:32:46.165531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:58.644 [2024-11-21 03:32:46.165538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:58.644 [2024-11-21 03:32:46.165546] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:58.644 [2024-11-21 03:32:46.165558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:58.644 [2024-11-21 03:32:46.165567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:58.644 [2024-11-21 03:32:46.165574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:58.644 [2024-11-21 03:32:46.165586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:58.644 [2024-11-21 03:32:46.165594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:58.644 [2024-11-21 03:32:46.165600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:58.644 [2024-11-21 03:32:46.165608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:58.644 [2024-11-21 03:32:46.165615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:58.644 [2024-11-21 03:32:46.165623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:58.644 [2024-11-21 03:32:46.165630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:58.644 [2024-11-21 03:32:46.165637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:58.644 [2024-11-21 03:32:46.165644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:58.644 [2024-11-21 03:32:46.165651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:58.644 [2024-11-21 03:32:46.165659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:58.644 [2024-11-21 03:32:46.165667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:58.644 [2024-11-21 03:32:46.165674] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:58.644 [2024-11-21 03:32:46.165683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:23:58.644 [2024-11-21 03:32:46.165692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:58.644 [2024-11-21 03:32:46.165700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:58.644 [2024-11-21 03:32:46.165709] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:58.644 [2024-11-21 03:32:46.165716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:58.644 [2024-11-21 03:32:46.165725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.644 [2024-11-21 03:32:46.165737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:58.644 [2024-11-21 03:32:46.165745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:23:58.644 [2024-11-21 03:32:46.165752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.644 [2024-11-21 03:32:46.179716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.644 [2024-11-21 03:32:46.179912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:58.644 [2024-11-21 03:32:46.180382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.915 ms 00:23:58.644 [2024-11-21 03:32:46.180435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.644 [2024-11-21 03:32:46.180602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.644 [2024-11-21 03:32:46.180690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:58.644 [2024-11-21 03:32:46.181184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:58.644 [2024-11-21 03:32:46.181222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.644 [2024-11-21 03:32:46.201162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.644 [2024-11-21 03:32:46.201340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:58.644 [2024-11-21 03:32:46.201414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.852 ms 00:23:58.644 [2024-11-21 03:32:46.201444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.644 [2024-11-21 03:32:46.201503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.644 [2024-11-21 03:32:46.201527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:58.644 [2024-11-21 03:32:46.201551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:58.644 [2024-11-21 03:32:46.201575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.644 [2024-11-21 03:32:46.202161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.644 [2024-11-21 03:32:46.202287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:58.644 [2024-11-21 03:32:46.202361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:23:58.644 [2024-11-21 03:32:46.202386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.644 [2024-11-21 03:32:46.202566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
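Each management step in this startup is traced as an Action/name/duration/status quadruple. A sketch that totals the per-step durations, again assuming a one-entry-per-line capture in the hypothetical ftl.log; the sum can be compared against the overall total that the "Management process finished, name 'FTL startup'" message reports further below:

    awk '/430:trace_step/ { sum += $(NF-1) }
         END { printf "sum of step durations: %.3f ms\n", sum }' ftl.log
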
00:23:58.644 [2024-11-21 03:32:46.202606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:58.644 [2024-11-21 03:32:46.202780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:23:58.644 [2024-11-21 03:32:46.202824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.906 [2024-11-21 03:32:46.210450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.906 [2024-11-21 03:32:46.210627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:58.906 [2024-11-21 03:32:46.210693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.575 ms 00:23:58.906 [2024-11-21 03:32:46.210731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.906 [2024-11-21 03:32:46.214691] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:23:58.906 [2024-11-21 03:32:46.214867] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:58.906 [2024-11-21 03:32:46.214988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.906 [2024-11-21 03:32:46.215013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:58.906 [2024-11-21 03:32:46.215037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.135 ms 00:23:58.906 [2024-11-21 03:32:46.215057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.906 [2024-11-21 03:32:46.231067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.906 [2024-11-21 03:32:46.231240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:58.906 [2024-11-21 03:32:46.231300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.959 ms 00:23:58.906 [2024-11-21 03:32:46.231324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.906 [2024-11-21 03:32:46.234312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.906 [2024-11-21 03:32:46.234457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:58.906 [2024-11-21 03:32:46.234516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.938 ms 00:23:58.906 [2024-11-21 03:32:46.234527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.906 [2024-11-21 03:32:46.237203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.906 [2024-11-21 03:32:46.237258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:58.906 [2024-11-21 03:32:46.237270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.374 ms 00:23:58.906 [2024-11-21 03:32:46.237290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.906 [2024-11-21 03:32:46.237637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.906 [2024-11-21 03:32:46.237654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:58.906 [2024-11-21 03:32:46.237663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:23:58.906 [2024-11-21 03:32:46.237671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.906 [2024-11-21 03:32:46.261331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.906 [2024-11-21 03:32:46.261392] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:58.906 [2024-11-21 03:32:46.261404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.639 ms 00:23:58.906 [2024-11-21 03:32:46.261412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.906 [2024-11-21 03:32:46.269441] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:58.906 [2024-11-21 03:32:46.272321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.906 [2024-11-21 03:32:46.272480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:58.906 [2024-11-21 03:32:46.272505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.862 ms 00:23:58.906 [2024-11-21 03:32:46.272514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.906 [2024-11-21 03:32:46.272587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.906 [2024-11-21 03:32:46.272603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:58.906 [2024-11-21 03:32:46.272612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:58.906 [2024-11-21 03:32:46.272620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.906 [2024-11-21 03:32:46.273339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.906 [2024-11-21 03:32:46.273371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:58.906 [2024-11-21 03:32:46.273386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:23:58.906 [2024-11-21 03:32:46.273395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.906 [2024-11-21 03:32:46.273421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.906 [2024-11-21 03:32:46.273430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:58.906 [2024-11-21 03:32:46.273439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:58.906 [2024-11-21 03:32:46.273446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.906 [2024-11-21 03:32:46.273488] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:58.906 [2024-11-21 03:32:46.273499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.906 [2024-11-21 03:32:46.273507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:58.906 [2024-11-21 03:32:46.273522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:58.906 [2024-11-21 03:32:46.273530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.906 [2024-11-21 03:32:46.278600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.906 [2024-11-21 03:32:46.278645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:58.906 [2024-11-21 03:32:46.278656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.050 ms 00:23:58.906 [2024-11-21 03:32:46.278664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.906 [2024-11-21 03:32:46.278746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.906 [2024-11-21 03:32:46.278756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:58.906 [2024-11-21 03:32:46.278766] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:23:58.906 [2024-11-21 03:32:46.278777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.906 [2024-11-21 03:32:46.280118] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 132.347 ms, result 0 00:24:00.293  [2024-11-21T03:32:48.803Z] Copying: 1064/1048576 [kB] (1064 kBps) [2024-11-21T03:32:49.755Z] Copying: 2152/1048576 [kB] (1088 kBps) [2024-11-21T03:32:50.699Z] Copying: 4952/1048576 [kB] (2800 kBps) [2024-11-21T03:32:51.643Z] Copying: 20/1024 [MB] (15 MBps) [2024-11-21T03:32:52.588Z] Copying: 37/1024 [MB] (17 MBps) [2024-11-21T03:32:53.532Z] Copying: 54/1024 [MB] (16 MBps) [2024-11-21T03:32:54.477Z] Copying: 82/1024 [MB] (27 MBps) [2024-11-21T03:32:55.864Z] Copying: 102/1024 [MB] (20 MBps) [2024-11-21T03:32:56.808Z] Copying: 124/1024 [MB] (21 MBps) [2024-11-21T03:32:57.751Z] Copying: 149/1024 [MB] (24 MBps) [2024-11-21T03:32:58.695Z] Copying: 165/1024 [MB] (16 MBps) [2024-11-21T03:32:59.635Z] Copying: 181/1024 [MB] (16 MBps) [2024-11-21T03:33:00.580Z] Copying: 197/1024 [MB] (16 MBps) [2024-11-21T03:33:01.525Z] Copying: 213/1024 [MB] (16 MBps) [2024-11-21T03:33:02.912Z] Copying: 229/1024 [MB] (15 MBps) [2024-11-21T03:33:03.485Z] Copying: 245/1024 [MB] (16 MBps) [2024-11-21T03:33:04.872Z] Copying: 264/1024 [MB] (18 MBps) [2024-11-21T03:33:05.816Z] Copying: 290/1024 [MB] (25 MBps) [2024-11-21T03:33:06.760Z] Copying: 338/1024 [MB] (48 MBps) [2024-11-21T03:33:07.703Z] Copying: 366/1024 [MB] (28 MBps) [2024-11-21T03:33:08.647Z] Copying: 391/1024 [MB] (24 MBps) [2024-11-21T03:33:09.657Z] Copying: 420/1024 [MB] (29 MBps) [2024-11-21T03:33:10.602Z] Copying: 442/1024 [MB] (22 MBps) [2024-11-21T03:33:11.547Z] Copying: 466/1024 [MB] (23 MBps) [2024-11-21T03:33:12.490Z] Copying: 487/1024 [MB] (21 MBps) [2024-11-21T03:33:13.878Z] Copying: 508/1024 [MB] (20 MBps) [2024-11-21T03:33:14.821Z] Copying: 545/1024 [MB] (37 MBps) [2024-11-21T03:33:15.764Z] Copying: 572/1024 [MB] (27 MBps) [2024-11-21T03:33:16.708Z] Copying: 602/1024 [MB] (29 MBps) [2024-11-21T03:33:17.654Z] Copying: 630/1024 [MB] (27 MBps) [2024-11-21T03:33:18.599Z] Copying: 669/1024 [MB] (39 MBps) [2024-11-21T03:33:19.541Z] Copying: 698/1024 [MB] (28 MBps) [2024-11-21T03:33:20.485Z] Copying: 715/1024 [MB] (17 MBps) [2024-11-21T03:33:21.873Z] Copying: 740/1024 [MB] (24 MBps) [2024-11-21T03:33:22.819Z] Copying: 762/1024 [MB] (21 MBps) [2024-11-21T03:33:23.763Z] Copying: 793/1024 [MB] (30 MBps) [2024-11-21T03:33:24.707Z] Copying: 825/1024 [MB] (32 MBps) [2024-11-21T03:33:25.651Z] Copying: 843/1024 [MB] (17 MBps) [2024-11-21T03:33:26.594Z] Copying: 865/1024 [MB] (22 MBps) [2024-11-21T03:33:27.539Z] Copying: 893/1024 [MB] (27 MBps) [2024-11-21T03:33:28.483Z] Copying: 918/1024 [MB] (24 MBps) [2024-11-21T03:33:29.876Z] Copying: 940/1024 [MB] (22 MBps) [2024-11-21T03:33:30.819Z] Copying: 956/1024 [MB] (16 MBps) [2024-11-21T03:33:31.762Z] Copying: 977/1024 [MB] (20 MBps) [2024-11-21T03:33:32.335Z] Copying: 1005/1024 [MB] (28 MBps) [2024-11-21T03:33:32.909Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-21 03:33:32.673851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.344 [2024-11-21 03:33:32.674286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:45.344 [2024-11-21 03:33:32.674316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:45.344 [2024-11-21 03:33:32.674337] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:45.344 [2024-11-21 03:33:32.674378] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:45.345 [2024-11-21 03:33:32.675294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.345 [2024-11-21 03:33:32.675406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:45.345 [2024-11-21 03:33:32.675418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:24:45.345 [2024-11-21 03:33:32.675428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.345 [2024-11-21 03:33:32.675671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.345 [2024-11-21 03:33:32.675683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:45.345 [2024-11-21 03:33:32.675693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:24:45.345 [2024-11-21 03:33:32.675701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.345 [2024-11-21 03:33:32.694441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.345 [2024-11-21 03:33:32.694502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:45.345 [2024-11-21 03:33:32.694615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.720 ms 00:24:45.345 [2024-11-21 03:33:32.694625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.345 [2024-11-21 03:33:32.701120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.345 [2024-11-21 03:33:32.701311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:45.345 [2024-11-21 03:33:32.701333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.462 ms 00:24:45.345 [2024-11-21 03:33:32.701344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.345 [2024-11-21 03:33:32.705033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.345 [2024-11-21 03:33:32.705087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:45.345 [2024-11-21 03:33:32.705098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.622 ms 00:24:45.345 [2024-11-21 03:33:32.705106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.345 [2024-11-21 03:33:32.710179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.345 [2024-11-21 03:33:32.710234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:45.345 [2024-11-21 03:33:32.710254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.026 ms 00:24:45.345 [2024-11-21 03:33:32.710263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.345 [2024-11-21 03:33:32.713834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.345 [2024-11-21 03:33:32.713886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:45.345 [2024-11-21 03:33:32.713919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.517 ms 00:24:45.345 [2024-11-21 03:33:32.713928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.345 [2024-11-21 03:33:32.716847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.345 [2024-11-21 03:33:32.717074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band 
info metadata 00:24:45.345 [2024-11-21 03:33:32.717094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.899 ms 00:24:45.345 [2024-11-21 03:33:32.717103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.345 [2024-11-21 03:33:32.720188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.345 [2024-11-21 03:33:32.720367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:45.345 [2024-11-21 03:33:32.720385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.028 ms 00:24:45.345 [2024-11-21 03:33:32.720393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.345 [2024-11-21 03:33:32.722736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.345 [2024-11-21 03:33:32.722795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:45.345 [2024-11-21 03:33:32.722805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.197 ms 00:24:45.345 [2024-11-21 03:33:32.722812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.345 [2024-11-21 03:33:32.724846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.345 [2024-11-21 03:33:32.724922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:45.345 [2024-11-21 03:33:32.724934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.957 ms 00:24:45.345 [2024-11-21 03:33:32.724942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.345 [2024-11-21 03:33:32.724984] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:45.345 [2024-11-21 03:33:32.725001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:45.345 [2024-11-21 03:33:32.725012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 2048 / 261120 wr_cnt: 1 state: open 00:24:45.345 [2024-11-21 03:33:32.725021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 
state: free 00:24:45.345 [2024-11-21 03:33:32.725124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 
0 / 261120 wr_cnt: 0 state: free 00:24:45.345 [2024-11-21 03:33:32.725322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 39-100: 0 / 261120 wr_cnt: 0 state: free (62 identical free-band entries collapsed) 00:24:45.346 [2024-11-21 03:33:32.725815] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:45.346 [2024-11-21 03:33:32.725827] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c25190a4-7a93-467f-bf6a-a9d74520ea9c 00:24:45.346 [2024-11-21 03:33:32.725840] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 263168 00:24:45.346 [2024-11-21 03:33:32.725848] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 264384 00:24:45.346 [2024-11-21 03:33:32.725862] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 262400 00:24:45.346 [2024-11-21 03:33:32.725871] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:24:45.346 [2024-11-21 03:33:32.725880] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:45.346 [2024-11-21 03:33:32.725888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:45.346 [2024-11-21 03:33:32.725910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:45.346 [2024-11-21 03:33:32.725917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:45.346 [2024-11-21 03:33:32.725924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:45.346 [2024-11-21 03:33:32.725932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.346 [2024-11-21 03:33:32.725944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:45.346 [2024-11-21 03:33:32.725953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:24:45.346 [2024-11-21 03:33:32.725961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.346 [2024-11-21
03:33:32.728461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.346 [2024-11-21 03:33:32.728496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:45.346 [2024-11-21 03:33:32.728507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.469 ms 00:24:45.346 [2024-11-21 03:33:32.728525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.346 [2024-11-21 03:33:32.728658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.346 [2024-11-21 03:33:32.728668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:45.346 [2024-11-21 03:33:32.728679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:24:45.346 [2024-11-21 03:33:32.728687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.346 [2024-11-21 03:33:32.736484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.346 [2024-11-21 03:33:32.736540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:45.346 [2024-11-21 03:33:32.736551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.346 [2024-11-21 03:33:32.736559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.346 [2024-11-21 03:33:32.736618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.346 [2024-11-21 03:33:32.736627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:45.346 [2024-11-21 03:33:32.736642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.346 [2024-11-21 03:33:32.736651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.346 [2024-11-21 03:33:32.736718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.346 [2024-11-21 03:33:32.736735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:45.346 [2024-11-21 03:33:32.736743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.346 [2024-11-21 03:33:32.736754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.346 [2024-11-21 03:33:32.736770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.346 [2024-11-21 03:33:32.736779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:45.346 [2024-11-21 03:33:32.736787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.346 [2024-11-21 03:33:32.736797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.346 [2024-11-21 03:33:32.750126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.346 [2024-11-21 03:33:32.750176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:45.346 [2024-11-21 03:33:32.750187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.346 [2024-11-21 03:33:32.750196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.346 [2024-11-21 03:33:32.760398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.346 [2024-11-21 03:33:32.760444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:45.346 [2024-11-21 03:33:32.760461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.346 [2024-11-21 03:33:32.760469] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.346 [2024-11-21 03:33:32.760519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.346 [2024-11-21 03:33:32.760529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:45.346 [2024-11-21 03:33:32.760538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.346 [2024-11-21 03:33:32.760546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.347 [2024-11-21 03:33:32.760582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.347 [2024-11-21 03:33:32.760591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:45.347 [2024-11-21 03:33:32.760599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.347 [2024-11-21 03:33:32.760608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.347 [2024-11-21 03:33:32.760683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.347 [2024-11-21 03:33:32.760694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:45.347 [2024-11-21 03:33:32.760709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.347 [2024-11-21 03:33:32.760717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.347 [2024-11-21 03:33:32.760745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.347 [2024-11-21 03:33:32.760758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:45.347 [2024-11-21 03:33:32.760767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.347 [2024-11-21 03:33:32.760775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.347 [2024-11-21 03:33:32.760814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.347 [2024-11-21 03:33:32.760827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:45.347 [2024-11-21 03:33:32.760837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.347 [2024-11-21 03:33:32.760844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.347 [2024-11-21 03:33:32.760892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.347 [2024-11-21 03:33:32.760930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:45.347 [2024-11-21 03:33:32.760938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.347 [2024-11-21 03:33:32.760946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.347 [2024-11-21 03:33:32.761078] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 87.204 ms, result 0 00:24:45.607 00:24:45.607 00:24:45.607 03:33:32 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:47.524 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:47.524 03:33:35 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:47.524 03:33:35 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:47.524 03:33:35 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:47.786 03:33:35 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:47.786 03:33:35 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:47.786 Process with pid 89858 is not found 00:24:47.786 Remove shared memory files 00:24:47.786 03:33:35 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 89858 00:24:47.786 03:33:35 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 89858 ']' 00:24:47.786 03:33:35 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 89858 00:24:47.786 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89858) - No such process 00:24:47.786 03:33:35 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 89858 is not found' 00:24:47.786 03:33:35 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:47.786 03:33:35 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:47.786 03:33:35 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:47.786 03:33:35 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:47.786 03:33:35 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:47.786 03:33:35 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:47.786 03:33:35 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:47.786 ************************************ 00:24:47.786 END TEST ftl_restore 00:24:47.786 ************************************ 00:24:47.786 00:24:47.786 real 4m13.428s 00:24:47.786 user 4m0.388s 00:24:47.786 sys 0m12.811s 00:24:47.786 03:33:35 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:47.786 03:33:35 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:47.786 03:33:35 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:47.786 03:33:35 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:47.786 03:33:35 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:47.786 03:33:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:47.786 ************************************ 00:24:47.786 START TEST ftl_dirty_shutdown 00:24:47.786 ************************************ 00:24:47.786 03:33:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:47.786 * Looking for test storage... 
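The ftl_restore verdict above comes down to one checksum round-trip: restore.sh records an md5 of the test file before the FTL device is torn down, then md5sum -c reports "testfile: OK" after the restore, and the trap/cleanup lines remove the artifacts. A minimal sketch of that record-then-verify pattern (paths are illustrative, not the script's actual variables):

  # Record a checksum before the dirty shutdown, verify it after restore.
  # The paths here are hypothetical; restore.sh derives its own names.
  testfile=/tmp/ftl_testfile
  dd if=/dev/urandom of="$testfile" bs=4096 count=1024 status=none   # seed data
  md5sum "$testfile" > "$testfile.md5"                               # record
  # ... write through the FTL bdev, shut it down, bring it back up ...
  md5sum -c "$testfile.md5"   # prints "/tmp/ftl_testfile: OK" if data survived
  rm -f "$testfile" "$testfile.md5"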
00:24:47.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:47.786 03:33:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:47.786 03:33:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:24:47.786 03:33:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:48.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.048 --rc genhtml_branch_coverage=1 00:24:48.048 --rc genhtml_function_coverage=1 00:24:48.048 --rc genhtml_legend=1 00:24:48.048 --rc geninfo_all_blocks=1 00:24:48.048 --rc geninfo_unexecuted_blocks=1 00:24:48.048 00:24:48.048 ' 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:48.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.048 --rc genhtml_branch_coverage=1 00:24:48.048 --rc genhtml_function_coverage=1 00:24:48.048 --rc genhtml_legend=1 00:24:48.048 --rc geninfo_all_blocks=1 00:24:48.048 --rc geninfo_unexecuted_blocks=1 00:24:48.048 00:24:48.048 ' 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:48.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.048 --rc genhtml_branch_coverage=1 00:24:48.048 --rc genhtml_function_coverage=1 00:24:48.048 --rc genhtml_legend=1 00:24:48.048 --rc geninfo_all_blocks=1 00:24:48.048 --rc geninfo_unexecuted_blocks=1 00:24:48.048 00:24:48.048 ' 00:24:48.048 03:33:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:48.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.049 --rc genhtml_branch_coverage=1 00:24:48.049 --rc genhtml_function_coverage=1 00:24:48.049 --rc genhtml_legend=1 00:24:48.049 --rc geninfo_all_blocks=1 00:24:48.049 --rc geninfo_unexecuted_blocks=1 00:24:48.049 00:24:48.049 ' 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:48.049 03:33:35 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=92554 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 92554 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 92554 ']' 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:48.049 03:33:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:48.049 [2024-11-21 03:33:35.531456] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:24:48.049 [2024-11-21 03:33:35.531765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92554 ] 00:24:48.310 [2024-11-21 03:33:35.669190] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
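With the target process up, every subsequent rpc.py call in this test goes through /var/tmp/spdk.sock; waitforlisten's job is just to block until that socket answers. A rough start-and-wait sketch (the retry loop is illustrative; the real helper in autotest_common.sh does more bookkeeping):

  # Start the target pinned to core 0, then poll its RPC socket until it
  # responds; rpc_get_methods fails fast while the socket is not up yet.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  for _ in $(seq 1 100); do
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break
      fi
      sleep 0.1
  done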
00:24:48.310 [2024-11-21 03:33:35.696175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.310 [2024-11-21 03:33:35.725039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.884 03:33:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.884 03:33:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:24:48.884 03:33:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:48.884 03:33:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:48.884 03:33:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:48.884 03:33:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:48.884 03:33:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:48.884 03:33:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:49.145 03:33:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:49.145 03:33:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:49.145 03:33:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:49.145 03:33:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:49.145 03:33:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:49.145 03:33:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:49.145 03:33:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:49.145 03:33:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:49.407 03:33:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:49.407 { 00:24:49.407 "name": "nvme0n1", 00:24:49.407 "aliases": [ 00:24:49.407 "52562bb1-34b0-4e10-93f0-14195189a285" 00:24:49.407 ], 00:24:49.407 "product_name": "NVMe disk", 00:24:49.407 "block_size": 4096, 00:24:49.407 "num_blocks": 1310720, 00:24:49.407 "uuid": "52562bb1-34b0-4e10-93f0-14195189a285", 00:24:49.407 "numa_id": -1, 00:24:49.407 "assigned_rate_limits": { 00:24:49.407 "rw_ios_per_sec": 0, 00:24:49.407 "rw_mbytes_per_sec": 0, 00:24:49.407 "r_mbytes_per_sec": 0, 00:24:49.407 "w_mbytes_per_sec": 0 00:24:49.407 }, 00:24:49.407 "claimed": true, 00:24:49.407 "claim_type": "read_many_write_one", 00:24:49.407 "zoned": false, 00:24:49.407 "supported_io_types": { 00:24:49.407 "read": true, 00:24:49.407 "write": true, 00:24:49.407 "unmap": true, 00:24:49.407 "flush": true, 00:24:49.407 "reset": true, 00:24:49.407 "nvme_admin": true, 00:24:49.407 "nvme_io": true, 00:24:49.407 "nvme_io_md": false, 00:24:49.407 "write_zeroes": true, 00:24:49.407 "zcopy": false, 00:24:49.407 "get_zone_info": false, 00:24:49.407 "zone_management": false, 00:24:49.407 "zone_append": false, 00:24:49.407 "compare": true, 00:24:49.407 "compare_and_write": false, 00:24:49.407 "abort": true, 00:24:49.407 "seek_hole": false, 00:24:49.407 "seek_data": false, 00:24:49.407 "copy": true, 00:24:49.407 "nvme_iov_md": false 00:24:49.407 }, 00:24:49.407 "driver_specific": { 00:24:49.407 "nvme": [ 00:24:49.407 { 00:24:49.407 "pci_address": "0000:00:11.0", 00:24:49.407 "trid": { 00:24:49.407 "trtype": "PCIe", 00:24:49.407 "traddr": "0000:00:11.0" 00:24:49.407 }, 00:24:49.407 "ctrlr_data": { 
00:24:49.407 "cntlid": 0, 00:24:49.407 "vendor_id": "0x1b36", 00:24:49.407 "model_number": "QEMU NVMe Ctrl", 00:24:49.407 "serial_number": "12341", 00:24:49.407 "firmware_revision": "8.0.0", 00:24:49.407 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:49.407 "oacs": { 00:24:49.407 "security": 0, 00:24:49.407 "format": 1, 00:24:49.407 "firmware": 0, 00:24:49.407 "ns_manage": 1 00:24:49.407 }, 00:24:49.407 "multi_ctrlr": false, 00:24:49.407 "ana_reporting": false 00:24:49.407 }, 00:24:49.407 "vs": { 00:24:49.407 "nvme_version": "1.4" 00:24:49.407 }, 00:24:49.407 "ns_data": { 00:24:49.407 "id": 1, 00:24:49.407 "can_share": false 00:24:49.407 } 00:24:49.407 } 00:24:49.407 ], 00:24:49.407 "mp_policy": "active_passive" 00:24:49.407 } 00:24:49.407 } 00:24:49.407 ]' 00:24:49.407 03:33:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:49.407 03:33:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:49.407 03:33:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:49.407 03:33:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:49.407 03:33:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:49.407 03:33:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:24:49.407 03:33:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:49.407 03:33:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:49.407 03:33:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:49.407 03:33:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:49.407 03:33:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:49.669 03:33:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=08d646c2-03a1-40fd-92f5-58fd1437bd38 00:24:49.669 03:33:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:49.669 03:33:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08d646c2-03a1-40fd-92f5-58fd1437bd38 00:24:49.930 03:33:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:50.211 03:33:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=0574822c-62dd-4131-8a4c-28dc7e4eca60 00:24:50.211 03:33:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0574822c-62dd-4131-8a4c-28dc7e4eca60 00:24:50.478 03:33:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=2232d8aa-62c7-471f-bb57-6b5ff5aa8ad4 00:24:50.478 03:33:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:50.478 03:33:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2232d8aa-62c7-471f-bb57-6b5ff5aa8ad4 00:24:50.478 03:33:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:50.478 03:33:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:50.478 03:33:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=2232d8aa-62c7-471f-bb57-6b5ff5aa8ad4 00:24:50.478 03:33:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:50.478 03:33:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 
2232d8aa-62c7-471f-bb57-6b5ff5aa8ad4 00:24:50.478 03:33:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=2232d8aa-62c7-471f-bb57-6b5ff5aa8ad4 00:24:50.478 03:33:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:50.478 03:33:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:50.478 03:33:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:50.478 03:33:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2232d8aa-62c7-471f-bb57-6b5ff5aa8ad4 00:24:50.739 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:50.739 { 00:24:50.739 "name": "2232d8aa-62c7-471f-bb57-6b5ff5aa8ad4", 00:24:50.739 "aliases": [ 00:24:50.739 "lvs/nvme0n1p0" 00:24:50.739 ], 00:24:50.739 "product_name": "Logical Volume", 00:24:50.739 "block_size": 4096, 00:24:50.739 "num_blocks": 26476544, 00:24:50.739 "uuid": "2232d8aa-62c7-471f-bb57-6b5ff5aa8ad4", 00:24:50.739 "assigned_rate_limits": { 00:24:50.739 "rw_ios_per_sec": 0, 00:24:50.739 "rw_mbytes_per_sec": 0, 00:24:50.739 "r_mbytes_per_sec": 0, 00:24:50.739 "w_mbytes_per_sec": 0 00:24:50.739 }, 00:24:50.739 "claimed": false, 00:24:50.739 "zoned": false, 00:24:50.739 "supported_io_types": { 00:24:50.739 "read": true, 00:24:50.739 "write": true, 00:24:50.739 "unmap": true, 00:24:50.739 "flush": false, 00:24:50.739 "reset": true, 00:24:50.739 "nvme_admin": false, 00:24:50.739 "nvme_io": false, 00:24:50.739 "nvme_io_md": false, 00:24:50.739 "write_zeroes": true, 00:24:50.739 "zcopy": false, 00:24:50.739 "get_zone_info": false, 00:24:50.739 "zone_management": false, 00:24:50.739 "zone_append": false, 00:24:50.739 "compare": false, 00:24:50.739 "compare_and_write": false, 00:24:50.739 "abort": false, 00:24:50.739 "seek_hole": true, 00:24:50.739 "seek_data": true, 00:24:50.739 "copy": false, 00:24:50.739 "nvme_iov_md": false 00:24:50.739 }, 00:24:50.739 "driver_specific": { 00:24:50.739 "lvol": { 00:24:50.739 "lvol_store_uuid": "0574822c-62dd-4131-8a4c-28dc7e4eca60", 00:24:50.739 "base_bdev": "nvme0n1", 00:24:50.739 "thin_provision": true, 00:24:50.739 "num_allocated_clusters": 0, 00:24:50.739 "snapshot": false, 00:24:50.739 "clone": false, 00:24:50.739 "esnap_clone": false 00:24:50.739 } 00:24:50.739 } 00:24:50.739 } 00:24:50.739 ]' 00:24:50.739 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:50.739 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:50.739 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:50.739 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:50.739 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:50.739 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:50.739 03:33:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:50.739 03:33:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:50.739 03:33:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:50.999 03:33:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:50.999 03:33:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:50.999 03:33:38 
ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 2232d8aa-62c7-471f-bb57-6b5ff5aa8ad4 00:24:50.999 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=2232d8aa-62c7-471f-bb57-6b5ff5aa8ad4 00:24:50.999 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:50.999 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:50.999 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:50.999 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2232d8aa-62c7-471f-bb57-6b5ff5aa8ad4 00:24:51.258 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ ... ]' (bdev_get_bdevs output for 2232d8aa-62c7-471f-bb57-6b5ff5aa8ad4, byte-identical to the dump above; elided) 00:24:51.259 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:51.259 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:51.259 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:51.259 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:51.259 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:51.259 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:51.259 03:33:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:51.259 03:33:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:51.518 03:33:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:51.518 03:33:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 2232d8aa-62c7-471f-bb57-6b5ff5aa8ad4 00:24:51.518
03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=2232d8aa-62c7-471f-bb57-6b5ff5aa8ad4 00:24:51.518 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:51.518 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:51.519 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:51.519 03:33:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2232d8aa-62c7-471f-bb57-6b5ff5aa8ad4 00:24:51.519 03:33:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ ... ]' (bdev_get_bdevs output identical to the two dumps above; elided) 00:24:51.801 03:33:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:51.801 03:33:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:51.801 03:33:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:51.801 03:33:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:51.801 03:33:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:51.801 03:33:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:51.801 03:33:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:51.801 03:33:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 2232d8aa-62c7-471f-bb57-6b5ff5aa8ad4 --l2p_dram_limit 10' 00:24:51.801 03:33:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:51.802 03:33:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:51.802 03:33:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0'
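Put together, the xtrace above boils down to a six-call RPC sequence: attach the base namespace, build a thin lvol on it, attach the cache device, carve a 5171 MiB split for the write buffer, and create the FTL bdev with a 10 MiB L2P budget. Collected in one place (the two UUIDs are minted per run; these are this run's values):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0       # base device -> nvme0n1
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs                               # lvstore on the base
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 0574822c-62dd-4131-8a4c-28dc7e4eca60   # thin 103424 MiB lvol
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0        # cache device -> nvc0n1
  $rpc bdev_split_create nvc0n1 -s 5171 1                                 # 5171 MiB cache partition
  $rpc -t 240 bdev_ftl_create -b ftl0 -d 2232d8aa-62c7-471f-bb57-6b5ff5aa8ad4 --l2p_dram_limit 10 -c nvc0n1p0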
00:24:51.802 03:33:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2232d8aa-62c7-471f-bb57-6b5ff5aa8ad4 --l2p_dram_limit 10 -c nvc0n1p0 00:24:51.802 [2024-11-21 03:33:39.323199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.802 [2024-11-21 03:33:39.323237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:51.802 [2024-11-21 03:33:39.323250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:51.802 [2024-11-21 03:33:39.323257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.802 [2024-11-21 03:33:39.323301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.803 [2024-11-21 03:33:39.323309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:51.803 [2024-11-21 03:33:39.323320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:51.803 [2024-11-21 03:33:39.323326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.803 [2024-11-21 03:33:39.323345] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:51.803 [2024-11-21 03:33:39.323545] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:51.803 [2024-11-21 03:33:39.323558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.803 [2024-11-21 03:33:39.323565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:51.803 [2024-11-21 03:33:39.323573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:24:51.803 [2024-11-21 03:33:39.323578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.803 [2024-11-21 03:33:39.323603] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1b77f572-ca96-4772-ae1d-10eeb1a8dd34 00:24:51.803 [2024-11-21 03:33:39.324651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.803 [2024-11-21 03:33:39.324682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:51.803 [2024-11-21 03:33:39.324690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:51.803 [2024-11-21 03:33:39.324700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.803 [2024-11-21 03:33:39.329440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.804 [2024-11-21 03:33:39.329471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:51.804 [2024-11-21 03:33:39.329482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.675 ms 00:24:51.804 [2024-11-21 03:33:39.329491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.804 [2024-11-21 03:33:39.329562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.804 [2024-11-21 03:33:39.329570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:51.804 [2024-11-21 03:33:39.329577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:51.804 [2024-11-21 03:33:39.329583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.804 [2024-11-21 03:33:39.329624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.804 [2024-11-21 03:33:39.329634] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:51.804 [2024-11-21 03:33:39.329641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:51.804 [2024-11-21 03:33:39.329648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.804 [2024-11-21 03:33:39.329665] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:51.804 [2024-11-21 03:33:39.330968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.804 [2024-11-21 03:33:39.330991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:51.804 [2024-11-21 03:33:39.331001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.306 ms 00:24:51.804 [2024-11-21 03:33:39.331008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.804 [2024-11-21 03:33:39.331037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.804 [2024-11-21 03:33:39.331044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:51.804 [2024-11-21 03:33:39.331053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:51.805 [2024-11-21 03:33:39.331059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.805 [2024-11-21 03:33:39.331074] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:51.806 [2024-11-21 03:33:39.331181] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:51.806 [2024-11-21 03:33:39.331191] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:51.806 [2024-11-21 03:33:39.331202] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:51.806 [2024-11-21 03:33:39.331212] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:51.806 [2024-11-21 03:33:39.331221] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:51.806 [2024-11-21 03:33:39.331233] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:51.806 [2024-11-21 03:33:39.331240] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:51.806 [2024-11-21 03:33:39.331246] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:51.806 [2024-11-21 03:33:39.331251] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:51.806 [2024-11-21 03:33:39.331259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.806 [2024-11-21 03:33:39.331265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:51.806 [2024-11-21 03:33:39.331272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:24:51.806 [2024-11-21 03:33:39.331278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.806 [2024-11-21 03:33:39.331342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.806 [2024-11-21 03:33:39.331348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:51.806 [2024-11-21 03:33:39.331355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:51.806 [2024-11-21 03:33:39.331361] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.806 [2024-11-21 03:33:39.331434] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:51.806 [2024-11-21 03:33:39.331443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:51.806 [2024-11-21 03:33:39.331451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:51.806 [2024-11-21 03:33:39.331456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.806 [2024-11-21 03:33:39.331464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:51.806 [2024-11-21 03:33:39.331469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:51.806 [2024-11-21 03:33:39.331475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:51.806 [2024-11-21 03:33:39.331480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:51.807 [2024-11-21 03:33:39.331486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:51.807 [2024-11-21 03:33:39.331491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:51.807 [2024-11-21 03:33:39.331498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:51.807 [2024-11-21 03:33:39.331503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:51.807 [2024-11-21 03:33:39.331511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:51.807 [2024-11-21 03:33:39.331515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:51.807 [2024-11-21 03:33:39.331522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:51.807 [2024-11-21 03:33:39.331527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.807 [2024-11-21 03:33:39.331533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:51.807 [2024-11-21 03:33:39.331538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:51.807 [2024-11-21 03:33:39.331544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.807 [2024-11-21 03:33:39.331549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:51.807 [2024-11-21 03:33:39.331555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:51.807 [2024-11-21 03:33:39.331560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.807 [2024-11-21 03:33:39.331566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:51.807 [2024-11-21 03:33:39.331572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:51.807 [2024-11-21 03:33:39.331578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.807 [2024-11-21 03:33:39.331584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:51.807 [2024-11-21 03:33:39.331589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:51.807 [2024-11-21 03:33:39.331594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.807 [2024-11-21 03:33:39.331602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:51.807 [2024-11-21 03:33:39.331607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:51.807 [2024-11-21 03:33:39.331613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.807 [2024-11-21 
03:33:39.331618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:51.807 [2024-11-21 03:33:39.331624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:51.808 [2024-11-21 03:33:39.331629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:51.808 [2024-11-21 03:33:39.331635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:51.808 [2024-11-21 03:33:39.331640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:51.808 [2024-11-21 03:33:39.331646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:51.808 [2024-11-21 03:33:39.331651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:51.808 [2024-11-21 03:33:39.331657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:51.808 [2024-11-21 03:33:39.331662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.808 [2024-11-21 03:33:39.331668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:51.808 [2024-11-21 03:33:39.331673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:51.808 [2024-11-21 03:33:39.331679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.808 [2024-11-21 03:33:39.331683] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:51.808 [2024-11-21 03:33:39.331692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:51.808 [2024-11-21 03:33:39.331697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:51.808 [2024-11-21 03:33:39.331704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.808 [2024-11-21 03:33:39.331711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:51.809 [2024-11-21 03:33:39.331717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:51.809 [2024-11-21 03:33:39.331722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:51.809 [2024-11-21 03:33:39.331728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:51.809 [2024-11-21 03:33:39.331732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:51.809 [2024-11-21 03:33:39.331738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:51.810 [2024-11-21 03:33:39.331746] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:51.810 [2024-11-21 03:33:39.331757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:51.810 [2024-11-21 03:33:39.331764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:51.810 [2024-11-21 03:33:39.331771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:51.810 [2024-11-21 03:33:39.331777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:51.810 [2024-11-21 03:33:39.331784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:51.810 [2024-11-21 03:33:39.331789] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:51.810 [2024-11-21 03:33:39.331796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:51.810 [2024-11-21 03:33:39.331802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:51.810 [2024-11-21 03:33:39.331808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:51.810 [2024-11-21 03:33:39.331813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:51.810 [2024-11-21 03:33:39.331820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:51.810 [2024-11-21 03:33:39.331825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:51.810 [2024-11-21 03:33:39.331831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:51.810 [2024-11-21 03:33:39.331836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:51.810 [2024-11-21 03:33:39.331843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:51.810 [2024-11-21 03:33:39.331848] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:51.810 [2024-11-21 03:33:39.331856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:51.810 [2024-11-21 03:33:39.331861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:51.810 [2024-11-21 03:33:39.331868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:51.810 [2024-11-21 03:33:39.331873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:51.810 [2024-11-21 03:33:39.331880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:51.810 [2024-11-21 03:33:39.331886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.810 [2024-11-21 03:33:39.331894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:51.810 [2024-11-21 03:33:39.331910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:24:51.810 [2024-11-21 03:33:39.331918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.810 [2024-11-21 03:33:39.331946] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
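The superblock rows above express every region in 4 KiB FTL blocks, so they can be cross-checked against the MiB figures of the region dump; for example the L2P row (type 0x2, blk_offs 0x20, blk_sz 0x5000) is exactly the "Region l2p ... offset: 0.12 MiB ... blocks: 80.00 MiB" printed earlier, and 80 MiB is also 20971520 L2P entries at the 4-byte address size. A quick check of the arithmetic:

  # blk_sz 0x5000 blocks x 4096 B = 80 MiB ("Region l2p ... blocks: 80.00 MiB")
  echo "$(( 0x5000 * 4096 / 1024 / 1024 )) MiB"   # -> 80
  # blk_offs 0x20 blocks x 4096 B = 131072 B = 0.12 MiB ("offset: 0.12 MiB")
  echo "$(( 0x20 * 4096 )) B"                     # -> 131072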
00:24:51.810 [2024-11-21 03:33:39.331955] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:55.113 [2024-11-21 03:33:42.546181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.113 [2024-11-21 03:33:42.546429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:55.113 [2024-11-21 03:33:42.546513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3214.219 ms 00:24:55.113 [2024-11-21 03:33:42.546544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.113 [2024-11-21 03:33:42.558182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.113 [2024-11-21 03:33:42.558364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:55.113 [2024-11-21 03:33:42.558435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.455 ms 00:24:55.113 [2024-11-21 03:33:42.558466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.113 [2024-11-21 03:33:42.558585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.113 [2024-11-21 03:33:42.558653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:55.113 [2024-11-21 03:33:42.558679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:55.113 [2024-11-21 03:33:42.558706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.113 [2024-11-21 03:33:42.569660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.113 [2024-11-21 03:33:42.569821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:55.113 [2024-11-21 03:33:42.569888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.853 ms 00:24:55.113 [2024-11-21 03:33:42.569934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.113 [2024-11-21 03:33:42.569978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.113 [2024-11-21 03:33:42.570002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:55.113 [2024-11-21 03:33:42.570022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:55.113 [2024-11-21 03:33:42.570090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.113 [2024-11-21 03:33:42.570607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.113 [2024-11-21 03:33:42.570675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:55.113 [2024-11-21 03:33:42.570755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:24:55.113 [2024-11-21 03:33:42.570820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.113 [2024-11-21 03:33:42.570964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.114 [2024-11-21 03:33:42.571034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:55.114 [2024-11-21 03:33:42.571061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:24:55.114 [2024-11-21 03:33:42.571084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.114 [2024-11-21 03:33:42.578269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.114 [2024-11-21 03:33:42.578415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:55.114 [2024-11-21 
03:33:42.578513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.150 ms 00:24:55.114 [2024-11-21 03:33:42.578538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.114 [2024-11-21 03:33:42.588022] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:55.114 [2024-11-21 03:33:42.591663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.114 [2024-11-21 03:33:42.591797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:55.114 [2024-11-21 03:33:42.591853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.034 ms 00:24:55.114 [2024-11-21 03:33:42.591876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.376 [2024-11-21 03:33:42.682065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.376 [2024-11-21 03:33:42.682233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:55.376 [2024-11-21 03:33:42.682303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.123 ms 00:24:55.376 [2024-11-21 03:33:42.682352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.376 [2024-11-21 03:33:42.682574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.376 [2024-11-21 03:33:42.682612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:55.376 [2024-11-21 03:33:42.682671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:24:55.376 [2024-11-21 03:33:42.682695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.376 [2024-11-21 03:33:42.688575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.376 [2024-11-21 03:33:42.688746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:55.376 [2024-11-21 03:33:42.688816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.812 ms 00:24:55.376 [2024-11-21 03:33:42.688840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.376 [2024-11-21 03:33:42.694134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.376 [2024-11-21 03:33:42.694295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:55.376 [2024-11-21 03:33:42.694370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.236 ms 00:24:55.376 [2024-11-21 03:33:42.694390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.376 [2024-11-21 03:33:42.694730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.376 [2024-11-21 03:33:42.694858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:55.376 [2024-11-21 03:33:42.694954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:24:55.376 [2024-11-21 03:33:42.694983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.376 [2024-11-21 03:33:42.740574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.376 [2024-11-21 03:33:42.740758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:55.376 [2024-11-21 03:33:42.740828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.544 ms 00:24:55.376 [2024-11-21 03:33:42.740852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.376 [2024-11-21 03:33:42.748031] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.376 [2024-11-21 03:33:42.748199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:55.376 [2024-11-21 03:33:42.748261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.085 ms 00:24:55.376 [2024-11-21 03:33:42.748285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.376 [2024-11-21 03:33:42.754160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.376 [2024-11-21 03:33:42.754324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:55.376 [2024-11-21 03:33:42.754346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.827 ms 00:24:55.376 [2024-11-21 03:33:42.754353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.376 [2024-11-21 03:33:42.760526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.376 [2024-11-21 03:33:42.760581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:55.376 [2024-11-21 03:33:42.760598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.135 ms 00:24:55.376 [2024-11-21 03:33:42.760605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.376 [2024-11-21 03:33:42.760641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.376 [2024-11-21 03:33:42.760650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:55.376 [2024-11-21 03:33:42.760662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:55.376 [2024-11-21 03:33:42.760670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.376 [2024-11-21 03:33:42.760772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.376 [2024-11-21 03:33:42.760783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:55.376 [2024-11-21 03:33:42.760805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:55.376 [2024-11-21 03:33:42.760812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.376 [2024-11-21 03:33:42.761968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3438.264 ms, result 0 00:24:55.376 { 00:24:55.376 "name": "ftl0", 00:24:55.376 "uuid": "1b77f572-ca96-4772-ae1d-10eeb1a8dd34" 00:24:55.376 } 00:24:55.376 03:33:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:55.376 03:33:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:55.637 03:33:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:55.637 03:33:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:55.637 03:33:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:55.897 /dev/nbd0 00:24:55.897 03:33:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:55.897 03:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:55.897 03:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:24:55.897 03:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:55.897 03:33:43 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:55.897 03:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:55.897 03:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:24:55.898 03:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:55.898 03:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:55.898 03:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:55.898 1+0 records in 00:24:55.898 1+0 records out 00:24:55.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523256 s, 7.8 MB/s 00:24:55.898 03:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:55.898 03:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:24:55.898 03:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:55.898 03:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:55.898 03:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:24:55.898 03:33:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:55.898 [2024-11-21 03:33:43.331513] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:24:55.898 [2024-11-21 03:33:43.331660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92697 ] 00:24:56.159 [2024-11-21 03:33:43.467000] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:56.159 [2024-11-21 03:33:43.497280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.159 [2024-11-21 03:33:43.525761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.101  [2024-11-21T03:33:45.607Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-21T03:33:46.994Z] Copying: 382/1024 [MB] (193 MBps) [2024-11-21T03:33:47.937Z] Copying: 577/1024 [MB] (194 MBps) [2024-11-21T03:33:48.878Z] Copying: 766/1024 [MB] (189 MBps) [2024-11-21T03:33:49.138Z] Copying: 952/1024 [MB] (185 MBps) [2024-11-21T03:33:49.138Z] Copying: 1024/1024 [MB] (average 193 MBps) 00:25:01.573 00:25:01.573 03:33:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:04.111 03:33:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:04.111 [2024-11-21 03:33:51.293755] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
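The sequence above (spdk_dd from /dev/urandom into testfile, md5sum, then spdk_dd onto /dev/nbd0 with --oflag=direct) is a standard write-and-verify round trip: 262144 blocks of 4096 bytes is exactly 1 GiB, matching the 1024/1024 [MB] progress, and the checksum is recorded presumably for comparison once the dirty shutdown has been recovered. The same pattern with plain coreutils against a generic block device $DEV would look like this (illustrative sketch only; the test itself drives spdk_dd against the FTL bdev exported at /dev/nbd0):

DEV=/dev/nbd0
dd if=/dev/urandom of=testfile bs=4096 count=262144          # 1 GiB of random data
md5sum testfile                                              # record checksum for later comparison
dd if=testfile of="$DEV" bs=4096 count=262144 oflag=direct   # push it through the device
# later, read the same extent back and compare:
dd if="$DEV" of=readback bs=4096 count=262144 iflag=direct
[ "$(md5sum < testfile)" = "$(md5sum < readback)" ] && echo verify OK
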
00:25:04.111 [2024-11-21 03:33:51.293882] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92783 ] 00:25:04.111 [2024-11-21 03:33:51.426598] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:04.111 [2024-11-21 03:33:51.451143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.111 [2024-11-21 03:33:51.474151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.045  [2024-11-21T03:33:53.544Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-21T03:33:54.917Z] Copying: 55/1024 [MB] (32 MBps) [2024-11-21T03:33:55.851Z] Copying: 77/1024 [MB] (21 MBps) [2024-11-21T03:33:56.784Z] Copying: 98/1024 [MB] (20 MBps) [2024-11-21T03:33:57.716Z] Copying: 118/1024 [MB] (20 MBps) [2024-11-21T03:33:58.649Z] Copying: 139/1024 [MB] (20 MBps) [2024-11-21T03:33:59.582Z] Copying: 158/1024 [MB] (18 MBps) [2024-11-21T03:34:00.956Z] Copying: 179/1024 [MB] (21 MBps) [2024-11-21T03:34:01.891Z] Copying: 198/1024 [MB] (19 MBps) [2024-11-21T03:34:02.824Z] Copying: 217/1024 [MB] (18 MBps) [2024-11-21T03:34:03.757Z] Copying: 233/1024 [MB] (16 MBps) [2024-11-21T03:34:04.692Z] Copying: 250/1024 [MB] (16 MBps) [2024-11-21T03:34:05.626Z] Copying: 263/1024 [MB] (13 MBps) [2024-11-21T03:34:06.561Z] Copying: 276/1024 [MB] (12 MBps) [2024-11-21T03:34:07.979Z] Copying: 294/1024 [MB] (18 MBps) [2024-11-21T03:34:08.546Z] Copying: 308/1024 [MB] (13 MBps) [2024-11-21T03:34:09.920Z] Copying: 326/1024 [MB] (17 MBps) [2024-11-21T03:34:10.855Z] Copying: 344/1024 [MB] (18 MBps) [2024-11-21T03:34:11.790Z] Copying: 358/1024 [MB] (13 MBps) [2024-11-21T03:34:12.724Z] Copying: 372/1024 [MB] (13 MBps) [2024-11-21T03:34:13.658Z] Copying: 386/1024 [MB] (14 MBps) [2024-11-21T03:34:14.592Z] Copying: 404/1024 [MB] (17 MBps) [2024-11-21T03:34:15.969Z] Copying: 431/1024 [MB] (27 MBps) [2024-11-21T03:34:16.540Z] Copying: 449/1024 [MB] (17 MBps) [2024-11-21T03:34:17.926Z] Copying: 468/1024 [MB] (19 MBps) [2024-11-21T03:34:18.869Z] Copying: 483/1024 [MB] (15 MBps) [2024-11-21T03:34:19.809Z] Copying: 498/1024 [MB] (15 MBps) [2024-11-21T03:34:20.745Z] Copying: 516/1024 [MB] (17 MBps) [2024-11-21T03:34:21.679Z] Copying: 533/1024 [MB] (17 MBps) [2024-11-21T03:34:22.613Z] Copying: 550/1024 [MB] (17 MBps) [2024-11-21T03:34:23.547Z] Copying: 572/1024 [MB] (21 MBps) [2024-11-21T03:34:24.920Z] Copying: 592/1024 [MB] (19 MBps) [2024-11-21T03:34:25.854Z] Copying: 613/1024 [MB] (21 MBps) [2024-11-21T03:34:26.788Z] Copying: 632/1024 [MB] (18 MBps) [2024-11-21T03:34:27.722Z] Copying: 657/1024 [MB] (25 MBps) [2024-11-21T03:34:28.656Z] Copying: 679/1024 [MB] (22 MBps) [2024-11-21T03:34:29.631Z] Copying: 700/1024 [MB] (21 MBps) [2024-11-21T03:34:30.564Z] Copying: 722/1024 [MB] (21 MBps) [2024-11-21T03:34:31.936Z] Copying: 741/1024 [MB] (19 MBps) [2024-11-21T03:34:32.871Z] Copying: 762/1024 [MB] (20 MBps) [2024-11-21T03:34:33.806Z] Copying: 779/1024 [MB] (16 MBps) [2024-11-21T03:34:34.740Z] Copying: 802/1024 [MB] (23 MBps) [2024-11-21T03:34:35.673Z] Copying: 826/1024 [MB] (24 MBps) [2024-11-21T03:34:36.660Z] Copying: 849/1024 [MB] (22 MBps) [2024-11-21T03:34:37.607Z] Copying: 869/1024 [MB] (20 MBps) [2024-11-21T03:34:38.541Z] Copying: 889/1024 [MB] (19 MBps) [2024-11-21T03:34:39.914Z] Copying: 912/1024 [MB] (23 MBps) 
[2024-11-21T03:34:40.848Z] Copying: 935/1024 [MB] (22 MBps) [2024-11-21T03:34:41.784Z] Copying: 959/1024 [MB] (24 MBps) [2024-11-21T03:34:42.721Z] Copying: 980/1024 [MB] (20 MBps) [2024-11-21T03:34:43.657Z] Copying: 1002/1024 [MB] (22 MBps) [2024-11-21T03:34:43.657Z] Copying: 1022/1024 [MB] (20 MBps) [2024-11-21T03:34:43.917Z] Copying: 1024/1024 [MB] (average 19 MBps) 00:25:56.352 00:25:56.352 03:34:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:56.352 03:34:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:56.615 03:34:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:56.615 [2024-11-21 03:34:44.110033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.615 [2024-11-21 03:34:44.110081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:56.615 [2024-11-21 03:34:44.110105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:56.615 [2024-11-21 03:34:44.110116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.615 [2024-11-21 03:34:44.110138] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:56.615 [2024-11-21 03:34:44.110571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.615 [2024-11-21 03:34:44.110586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:56.615 [2024-11-21 03:34:44.110596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:25:56.615 [2024-11-21 03:34:44.110604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.615 [2024-11-21 03:34:44.113215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.615 [2024-11-21 03:34:44.113247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:56.615 [2024-11-21 03:34:44.113258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.588 ms 00:25:56.615 [2024-11-21 03:34:44.113270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.615 [2024-11-21 03:34:44.129102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.615 [2024-11-21 03:34:44.129136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:56.615 [2024-11-21 03:34:44.129148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.812 ms 00:25:56.615 [2024-11-21 03:34:44.129155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.615 [2024-11-21 03:34:44.135481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.615 [2024-11-21 03:34:44.135505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:56.615 [2024-11-21 03:34:44.135516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.293 ms 00:25:56.615 [2024-11-21 03:34:44.135524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.615 [2024-11-21 03:34:44.137348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.615 [2024-11-21 03:34:44.137378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:56.615 [2024-11-21 03:34:44.137389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.755 ms 00:25:56.615 [2024-11-21 03:34:44.137396] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.615 [2024-11-21 03:34:44.142062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.615 [2024-11-21 03:34:44.142197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:56.615 [2024-11-21 03:34:44.142216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.631 ms 00:25:56.615 [2024-11-21 03:34:44.142224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.615 [2024-11-21 03:34:44.142342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.615 [2024-11-21 03:34:44.142351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:56.615 [2024-11-21 03:34:44.142361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:25:56.615 [2024-11-21 03:34:44.142368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.615 [2024-11-21 03:34:44.144964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.615 [2024-11-21 03:34:44.144992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:56.615 [2024-11-21 03:34:44.145002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.576 ms 00:25:56.615 [2024-11-21 03:34:44.145008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.615 [2024-11-21 03:34:44.147281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.615 [2024-11-21 03:34:44.147310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:56.615 [2024-11-21 03:34:44.147320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.235 ms 00:25:56.615 [2024-11-21 03:34:44.147326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.615 [2024-11-21 03:34:44.149121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.615 [2024-11-21 03:34:44.149225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:56.615 [2024-11-21 03:34:44.149242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.762 ms 00:25:56.615 [2024-11-21 03:34:44.149248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.615 [2024-11-21 03:34:44.152007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.615 [2024-11-21 03:34:44.152101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:56.615 [2024-11-21 03:34:44.152135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.674 ms 00:25:56.615 [2024-11-21 03:34:44.152156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.615 [2024-11-21 03:34:44.152243] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:56.615 [2024-11-21 03:34:44.152284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:56.615 [2024-11-21 03:34:44.152326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:56.615 [2024-11-21 03:34:44.152347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:56.615 [2024-11-21 03:34:44.152382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:56.615 [2024-11-21 03:34:44.152405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:56.615 [2024-11-21 03:34:44.152429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:56.615 [2024-11-21 03:34:44.152450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:56.615 [2024-11-21 03:34:44.152476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:56.615 [2024-11-21 03:34:44.152496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:56.615 [2024-11-21 03:34:44.152521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:56.615 [2024-11-21 03:34:44.152542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:56.615 [2024-11-21 03:34:44.152567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:56.615 [2024-11-21 03:34:44.152588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.152612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.152633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.152657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.152678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.152703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.152724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.152752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.152774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.152797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.152818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.152843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.152864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.152889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.153263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.153360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.153509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.153600] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.153801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.153982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 
03:34:44.154694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.154991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 
00:25:56.616 [2024-11-21 03:34:44.155289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:56.616 [2024-11-21 03:34:44.155789] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:56.616 [2024-11-21 03:34:44.155815] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b77f572-ca96-4772-ae1d-10eeb1a8dd34 00:25:56.616 [2024-11-21 03:34:44.155836] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:56.617 [2024-11-21 03:34:44.155859] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:56.617 [2024-11-21 03:34:44.155887] ftl_debug.c: 
215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:56.617 [2024-11-21 03:34:44.155943] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:56.617 [2024-11-21 03:34:44.155962] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:56.617 [2024-11-21 03:34:44.155993] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:56.617 [2024-11-21 03:34:44.156013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:56.617 [2024-11-21 03:34:44.156034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:56.617 [2024-11-21 03:34:44.156053] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:56.617 [2024-11-21 03:34:44.156078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.617 [2024-11-21 03:34:44.156098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:56.617 [2024-11-21 03:34:44.156130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.839 ms 00:25:56.617 [2024-11-21 03:34:44.156151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.617 [2024-11-21 03:34:44.158697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.617 [2024-11-21 03:34:44.158767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:56.617 [2024-11-21 03:34:44.158795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.438 ms 00:25:56.617 [2024-11-21 03:34:44.158815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.617 [2024-11-21 03:34:44.159021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.617 [2024-11-21 03:34:44.159053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:56.617 [2024-11-21 03:34:44.159079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:25:56.617 [2024-11-21 03:34:44.159099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.617 [2024-11-21 03:34:44.165763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.617 [2024-11-21 03:34:44.165793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:56.617 [2024-11-21 03:34:44.165804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.617 [2024-11-21 03:34:44.165811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.617 [2024-11-21 03:34:44.165862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.617 [2024-11-21 03:34:44.165873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:56.617 [2024-11-21 03:34:44.165882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.617 [2024-11-21 03:34:44.165893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.617 [2024-11-21 03:34:44.165983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.617 [2024-11-21 03:34:44.165993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:56.617 [2024-11-21 03:34:44.166005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.617 [2024-11-21 03:34:44.166012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.617 [2024-11-21 03:34:44.166030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.617 
[2024-11-21 03:34:44.166038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:56.617 [2024-11-21 03:34:44.166048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.617 [2024-11-21 03:34:44.166055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.617 [2024-11-21 03:34:44.174880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.617 [2024-11-21 03:34:44.174930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:56.617 [2024-11-21 03:34:44.174942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.617 [2024-11-21 03:34:44.174950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.879 [2024-11-21 03:34:44.182364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.879 [2024-11-21 03:34:44.182401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:56.879 [2024-11-21 03:34:44.182412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.879 [2024-11-21 03:34:44.182419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.879 [2024-11-21 03:34:44.182467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.879 [2024-11-21 03:34:44.182475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:56.879 [2024-11-21 03:34:44.182485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.879 [2024-11-21 03:34:44.182492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.879 [2024-11-21 03:34:44.182547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.879 [2024-11-21 03:34:44.182555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:56.879 [2024-11-21 03:34:44.182565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.879 [2024-11-21 03:34:44.182574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.879 [2024-11-21 03:34:44.182642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.879 [2024-11-21 03:34:44.182651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:56.879 [2024-11-21 03:34:44.182660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.879 [2024-11-21 03:34:44.182668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.879 [2024-11-21 03:34:44.182698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.879 [2024-11-21 03:34:44.182707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:56.879 [2024-11-21 03:34:44.182717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.879 [2024-11-21 03:34:44.182724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.879 [2024-11-21 03:34:44.182765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.879 [2024-11-21 03:34:44.182778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:56.879 [2024-11-21 03:34:44.182788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.879 [2024-11-21 03:34:44.182795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.879 [2024-11-21 03:34:44.182843] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.879 [2024-11-21 03:34:44.182853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:56.879 [2024-11-21 03:34:44.182862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.879 [2024-11-21 03:34:44.182871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.879 [2024-11-21 03:34:44.183035] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 72.950 ms, result 0 00:25:56.879 true 00:25:56.879 03:34:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 92554 00:25:56.879 03:34:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid92554 00:25:56.879 03:34:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:56.879 [2024-11-21 03:34:44.276470] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:25:56.879 [2024-11-21 03:34:44.276585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93340 ] 00:25:56.879 [2024-11-21 03:34:44.409412] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:56.879 [2024-11-21 03:34:44.440271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.140 [2024-11-21 03:34:44.460412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.083  [2024-11-21T03:34:46.593Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-21T03:34:47.536Z] Copying: 379/1024 [MB] (190 MBps) [2024-11-21T03:34:48.923Z] Copying: 640/1024 [MB] (260 MBps) [2024-11-21T03:34:49.184Z] Copying: 900/1024 [MB] (260 MBps) [2024-11-21T03:34:49.184Z] Copying: 1024/1024 [MB] (average 227 MBps) 00:26:01.619 00:26:01.619 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 92554 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:01.619 03:34:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:01.878 [2024-11-21 03:34:49.203614] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:26:01.878 [2024-11-21 03:34:49.203723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93393 ] 00:26:01.878 [2024-11-21 03:34:49.335713] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
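This is the dirty-shutdown injection itself: after ftl0 is unloaded, the spdk_tgt process (pid 92554) is killed with SIGKILL and its shared-memory trace file removed, so nothing further can perform an orderly exit. The follow-up spdk_dd then stands up its own SPDK instance from the saved ftl.json and writes a second 1 GiB of fresh random data directly to the ftl0 bdev (--ob=ftl0, no nbd this time) at --seek=262144, i.e. starting at the 1 GiB mark given ftl0's 4096-byte block size. Condensed from the commands above ($spdk_tgt_pid is an illustrative stand-in for the literal 92554):

kill -9 "$spdk_tgt_pid"                              # SIGKILL: no graceful teardown for the target
rm -f "/dev/shm/spdk_tgt_trace.pid$spdk_tgt_pid"     # drop its shm trace file
"$SPDK_BIN_DIR/spdk_dd" --if=testfile2 --ob=ftl0 \
    --count=262144 --seek=262144 --json=ftl.json     # re-attach from saved config, write 2nd GiB

As the startup log below shows, the re-attach finds the write-buffer blobstore in need of recovery before the FTL device comes back up.
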
00:26:01.878 [2024-11-21 03:34:49.363205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.878 [2024-11-21 03:34:49.381461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.139 [2024-11-21 03:34:49.462503] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:02.139 [2024-11-21 03:34:49.462551] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:02.139 [2024-11-21 03:34:49.524115] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:02.139 [2024-11-21 03:34:49.524402] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:02.139 [2024-11-21 03:34:49.525059] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:02.403 [2024-11-21 03:34:49.777624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.403 [2024-11-21 03:34:49.777652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:02.403 [2024-11-21 03:34:49.777662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:02.403 [2024-11-21 03:34:49.777668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.403 [2024-11-21 03:34:49.777702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.403 [2024-11-21 03:34:49.777710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:02.403 [2024-11-21 03:34:49.777716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:02.403 [2024-11-21 03:34:49.777721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.403 [2024-11-21 03:34:49.777740] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:02.403 [2024-11-21 03:34:49.777927] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:02.403 [2024-11-21 03:34:49.777938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.403 [2024-11-21 03:34:49.777944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:02.403 [2024-11-21 03:34:49.777950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:26:02.403 [2024-11-21 03:34:49.777955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.403 [2024-11-21 03:34:49.778843] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:02.403 [2024-11-21 03:34:49.780819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.403 [2024-11-21 03:34:49.780843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:02.403 [2024-11-21 03:34:49.780855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.978 ms 00:26:02.403 [2024-11-21 03:34:49.780863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.403 [2024-11-21 03:34:49.780919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.403 [2024-11-21 03:34:49.780927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:02.403 [2024-11-21 03:34:49.780937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:02.403 [2024-11-21 03:34:49.780942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.403 [2024-11-21 03:34:49.785203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:02.403 [2024-11-21 03:34:49.785224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:02.403 [2024-11-21 03:34:49.785232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.231 ms 00:26:02.403 [2024-11-21 03:34:49.785237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.403 [2024-11-21 03:34:49.785299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.403 [2024-11-21 03:34:49.785310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:02.403 [2024-11-21 03:34:49.785317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:02.403 [2024-11-21 03:34:49.785325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.403 [2024-11-21 03:34:49.785359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.403 [2024-11-21 03:34:49.785366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:02.403 [2024-11-21 03:34:49.785372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:02.403 [2024-11-21 03:34:49.785377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.403 [2024-11-21 03:34:49.785392] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:02.403 [2024-11-21 03:34:49.786517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.403 [2024-11-21 03:34:49.786536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:02.403 [2024-11-21 03:34:49.786543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.129 ms 00:26:02.403 [2024-11-21 03:34:49.786552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.403 [2024-11-21 03:34:49.786578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.403 [2024-11-21 03:34:49.786587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:02.403 [2024-11-21 03:34:49.786593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:02.403 [2024-11-21 03:34:49.786602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.403 [2024-11-21 03:34:49.786615] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:02.403 [2024-11-21 03:34:49.786632] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:02.403 [2024-11-21 03:34:49.786662] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:02.403 [2024-11-21 03:34:49.786677] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:02.403 [2024-11-21 03:34:49.786757] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:02.403 [2024-11-21 03:34:49.786765] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:02.403 [2024-11-21 03:34:49.786773] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:02.403 [2024-11-21 03:34:49.786780] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:02.403 [2024-11-21 03:34:49.786786] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:02.403 [2024-11-21 03:34:49.786792] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:02.403 [2024-11-21 03:34:49.786797] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:02.403 [2024-11-21 03:34:49.786802] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:02.403 [2024-11-21 03:34:49.786807] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:02.403 [2024-11-21 03:34:49.786816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.403 [2024-11-21 03:34:49.786821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:02.403 [2024-11-21 03:34:49.786827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:26:02.403 [2024-11-21 03:34:49.786832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.403 [2024-11-21 03:34:49.786910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.403 [2024-11-21 03:34:49.786916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:02.403 [2024-11-21 03:34:49.786921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:26:02.403 [2024-11-21 03:34:49.786927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.403 [2024-11-21 03:34:49.786998] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:02.404 [2024-11-21 03:34:49.787008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:02.404 [2024-11-21 03:34:49.787014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:02.404 [2024-11-21 03:34:49.787022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:02.404 [2024-11-21 03:34:49.787031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:02.404 [2024-11-21 03:34:49.787036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:02.404 [2024-11-21 03:34:49.787042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:02.404 [2024-11-21 03:34:49.787048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:02.404 [2024-11-21 03:34:49.787058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:02.404 [2024-11-21 03:34:49.787063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:02.404 [2024-11-21 03:34:49.787068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:02.404 [2024-11-21 03:34:49.787073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:02.404 [2024-11-21 03:34:49.787078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:02.404 [2024-11-21 03:34:49.787083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:02.404 [2024-11-21 03:34:49.787088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:02.404 [2024-11-21 03:34:49.787093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:02.404 [2024-11-21 03:34:49.787098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:02.404 [2024-11-21 03:34:49.787103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:02.404 [2024-11-21 03:34:49.787107] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:02.404 [2024-11-21 03:34:49.787112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:02.404 [2024-11-21 03:34:49.787117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:02.404 [2024-11-21 03:34:49.787122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:02.404 [2024-11-21 03:34:49.787127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:02.404 [2024-11-21 03:34:49.787132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:02.404 [2024-11-21 03:34:49.787140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:02.404 [2024-11-21 03:34:49.787145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:02.404 [2024-11-21 03:34:49.787150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:02.404 [2024-11-21 03:34:49.787154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:02.404 [2024-11-21 03:34:49.787159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:02.404 [2024-11-21 03:34:49.787164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:02.404 [2024-11-21 03:34:49.787169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:02.404 [2024-11-21 03:34:49.787175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:02.404 [2024-11-21 03:34:49.787180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:02.404 [2024-11-21 03:34:49.787187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:02.404 [2024-11-21 03:34:49.787192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:02.404 [2024-11-21 03:34:49.787198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:02.404 [2024-11-21 03:34:49.787204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:02.404 [2024-11-21 03:34:49.787210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:02.404 [2024-11-21 03:34:49.787215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:02.404 [2024-11-21 03:34:49.787221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:02.404 [2024-11-21 03:34:49.787228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:02.404 [2024-11-21 03:34:49.787234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:02.404 [2024-11-21 03:34:49.787240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:02.404 [2024-11-21 03:34:49.787246] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:02.404 [2024-11-21 03:34:49.787252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:02.404 [2024-11-21 03:34:49.787258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:02.404 [2024-11-21 03:34:49.787264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:02.404 [2024-11-21 03:34:49.787270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:02.404 [2024-11-21 03:34:49.787276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:02.404 [2024-11-21 03:34:49.787281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:02.404 
[2024-11-21 03:34:49.787287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:02.404 [2024-11-21 03:34:49.787292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:02.404 [2024-11-21 03:34:49.787298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:02.404 [2024-11-21 03:34:49.787305] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:02.404 [2024-11-21 03:34:49.787312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:02.404 [2024-11-21 03:34:49.787319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:02.404 [2024-11-21 03:34:49.787327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:02.404 [2024-11-21 03:34:49.787333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:02.404 [2024-11-21 03:34:49.787339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:02.404 [2024-11-21 03:34:49.787345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:02.404 [2024-11-21 03:34:49.787351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:02.404 [2024-11-21 03:34:49.787357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:02.404 [2024-11-21 03:34:49.787363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:02.404 [2024-11-21 03:34:49.787369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:02.404 [2024-11-21 03:34:49.787375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:02.404 [2024-11-21 03:34:49.787383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:02.404 [2024-11-21 03:34:49.787389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:02.404 [2024-11-21 03:34:49.787395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:02.404 [2024-11-21 03:34:49.787401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:02.404 [2024-11-21 03:34:49.787407] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:02.404 [2024-11-21 03:34:49.787415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:02.404 [2024-11-21 03:34:49.787423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:02.404 [2024-11-21 03:34:49.787431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:02.404 [2024-11-21 03:34:49.787438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:02.404 [2024-11-21 03:34:49.787444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:02.404 [2024-11-21 03:34:49.787450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.404 [2024-11-21 03:34:49.787456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:02.404 [2024-11-21 03:34:49.787462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:26:02.404 [2024-11-21 03:34:49.787468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.404 [2024-11-21 03:34:49.795317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.404 [2024-11-21 03:34:49.795339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:02.404 [2024-11-21 03:34:49.795347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.816 ms 00:26:02.404 [2024-11-21 03:34:49.795354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.404 [2024-11-21 03:34:49.795418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.404 [2024-11-21 03:34:49.795426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:02.404 [2024-11-21 03:34:49.795432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:26:02.404 [2024-11-21 03:34:49.795438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.404 [2024-11-21 03:34:49.814793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.404 [2024-11-21 03:34:49.814832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:02.404 [2024-11-21 03:34:49.814845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.321 ms 00:26:02.404 [2024-11-21 03:34:49.814861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.404 [2024-11-21 03:34:49.814931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.404 [2024-11-21 03:34:49.814951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:02.404 [2024-11-21 03:34:49.814960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:02.404 [2024-11-21 03:34:49.814969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.404 [2024-11-21 03:34:49.815319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.404 [2024-11-21 03:34:49.815347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:02.404 [2024-11-21 03:34:49.815359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:26:02.404 [2024-11-21 03:34:49.815368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.404 [2024-11-21 03:34:49.815520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.404 [2024-11-21 03:34:49.815545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:02.405 [2024-11-21 03:34:49.815555] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:26:02.405 [2024-11-21 03:34:49.815564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.405 [2024-11-21 03:34:49.820648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.405 [2024-11-21 03:34:49.820682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:02.405 [2024-11-21 03:34:49.820692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.062 ms 00:26:02.405 [2024-11-21 03:34:49.820703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.405 [2024-11-21 03:34:49.822921] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:02.405 [2024-11-21 03:34:49.822950] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:02.405 [2024-11-21 03:34:49.822961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.405 [2024-11-21 03:34:49.822969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:02.405 [2024-11-21 03:34:49.822978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.178 ms 00:26:02.405 [2024-11-21 03:34:49.822985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.405 [2024-11-21 03:34:49.836254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.405 [2024-11-21 03:34:49.836278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:02.405 [2024-11-21 03:34:49.836287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.231 ms 00:26:02.405 [2024-11-21 03:34:49.836294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.405 [2024-11-21 03:34:49.837704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.405 [2024-11-21 03:34:49.837727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:02.405 [2024-11-21 03:34:49.837733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.380 ms 00:26:02.405 [2024-11-21 03:34:49.837739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.405 [2024-11-21 03:34:49.838828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.405 [2024-11-21 03:34:49.838851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:02.405 [2024-11-21 03:34:49.838863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.065 ms 00:26:02.405 [2024-11-21 03:34:49.838869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.405 [2024-11-21 03:34:49.839138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.405 [2024-11-21 03:34:49.839153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:02.405 [2024-11-21 03:34:49.839160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:26:02.405 [2024-11-21 03:34:49.839165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.405 [2024-11-21 03:34:49.853274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.405 [2024-11-21 03:34:49.853301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:02.405 [2024-11-21 03:34:49.853310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
14.097 ms 00:26:02.405 [2024-11-21 03:34:49.853316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.405 [2024-11-21 03:34:49.859244] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:02.405 [2024-11-21 03:34:49.860890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.405 [2024-11-21 03:34:49.860916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:02.405 [2024-11-21 03:34:49.860924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.545 ms 00:26:02.405 [2024-11-21 03:34:49.860931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.405 [2024-11-21 03:34:49.860968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.405 [2024-11-21 03:34:49.860976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:02.405 [2024-11-21 03:34:49.860984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:02.405 [2024-11-21 03:34:49.860989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.405 [2024-11-21 03:34:49.861038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.405 [2024-11-21 03:34:49.861045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:02.405 [2024-11-21 03:34:49.861051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:02.405 [2024-11-21 03:34:49.861057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.405 [2024-11-21 03:34:49.861071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.405 [2024-11-21 03:34:49.861078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:02.405 [2024-11-21 03:34:49.861084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:02.405 [2024-11-21 03:34:49.861091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.405 [2024-11-21 03:34:49.861117] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:02.405 [2024-11-21 03:34:49.861124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.405 [2024-11-21 03:34:49.861130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:02.405 [2024-11-21 03:34:49.861135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:02.405 [2024-11-21 03:34:49.861141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.405 [2024-11-21 03:34:49.864002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.405 [2024-11-21 03:34:49.864023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:02.405 [2024-11-21 03:34:49.864030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.849 ms 00:26:02.405 [2024-11-21 03:34:49.864036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.405 [2024-11-21 03:34:49.865924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.405 [2024-11-21 03:34:49.865949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:02.405 [2024-11-21 03:34:49.865961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:02.405 [2024-11-21 03:34:49.865968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.405 
[2024-11-21 03:34:49.866783] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 88.856 ms, result 0 00:26:03.349 
[2024-11-21T03:35:48.323Z] Copying: 1014/1024 [MB] (19 MBps) [2024-11-21T03:35:48.323Z] Copying: 1048248/1048576 [kB] (9128 kBps) [2024-11-21T03:35:48.323Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-21 03:35:48.168816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.758 [2024-11-21 03:35:48.168889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:00.758 [2024-11-21 03:35:48.168916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:00.758 [2024-11-21 03:35:48.168926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.758 [2024-11-21 03:35:48.169649] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:00.758 [2024-11-21 03:35:48.172398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.758 [2024-11-21 03:35:48.172445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:00.758 [2024-11-21 03:35:48.172456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.724 ms 00:27:00.758 [2024-11-21 03:35:48.172466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.758 [2024-11-21 03:35:48.184097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.758 [2024-11-21 03:35:48.184151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:00.758 [2024-11-21 03:35:48.184166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.297 ms 00:27:00.758 [2024-11-21 03:35:48.184175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.758 [2024-11-21 03:35:48.207798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.758 [2024-11-21 03:35:48.207853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:00.758 [2024-11-21 03:35:48.207866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.605 ms 00:27:00.758 [2024-11-21 03:35:48.207874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.758 [2024-11-21 03:35:48.214020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.758 [2024-11-21 03:35:48.214057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:00.758 [2024-11-21 03:35:48.214069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.102 ms 00:27:00.758 [2024-11-21 03:35:48.214078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.758 [2024-11-21 03:35:48.217018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.758 [2024-11-21 03:35:48.217058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:00.758 [2024-11-21 03:35:48.217068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.895 ms 00:27:00.758 [2024-11-21 03:35:48.217077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.758 [2024-11-21 03:35:48.221967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.758 [2024-11-21 03:35:48.222010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:00.758 [2024-11-21 03:35:48.222021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.848 ms 00:27:00.758 [2024-11-21 03:35:48.222029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.021 [2024-11-21 03:35:48.352353] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.022 [2024-11-21 03:35:48.352416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:01.022 [2024-11-21 03:35:48.352428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 130.266 ms 00:27:01.022 [2024-11-21 03:35:48.352453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.022 [2024-11-21 03:35:48.355176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.022 [2024-11-21 03:35:48.355219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:01.022 [2024-11-21 03:35:48.355230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.706 ms 00:27:01.022 [2024-11-21 03:35:48.355237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.022 [2024-11-21 03:35:48.357457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.022 [2024-11-21 03:35:48.357497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:01.022 [2024-11-21 03:35:48.357507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.178 ms 00:27:01.022 [2024-11-21 03:35:48.357515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.022 [2024-11-21 03:35:48.359735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.022 [2024-11-21 03:35:48.359778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:01.022 [2024-11-21 03:35:48.359787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.180 ms 00:27:01.022 [2024-11-21 03:35:48.359795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.022 [2024-11-21 03:35:48.362158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.022 [2024-11-21 03:35:48.362197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:01.022 [2024-11-21 03:35:48.362207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.296 ms 00:27:01.022 [2024-11-21 03:35:48.362214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.022 [2024-11-21 03:35:48.362252] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:01.022 [2024-11-21 03:35:48.362275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 90624 / 261120 wr_cnt: 1 state: open 00:27:01.022 [2024-11-21 03:35:48.362286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362549] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 
03:35:48.362746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 
00:27:01.022 [2024-11-21 03:35:48.362956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.362965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.363014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.363023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.363031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.363040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.363047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.363057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.363065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.363073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.363081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:01.022 [2024-11-21 03:35:48.363089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:01.023 [2024-11-21 03:35:48.363097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:01.023 [2024-11-21 03:35:48.363106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:01.023 [2024-11-21 03:35:48.363114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:01.023 [2024-11-21 03:35:48.363121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:01.023 [2024-11-21 03:35:48.363130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:01.023 [2024-11-21 03:35:48.363146] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:01.023 [2024-11-21 03:35:48.363159] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b77f572-ca96-4772-ae1d-10eeb1a8dd34 00:27:01.023 [2024-11-21 03:35:48.363168] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 90624 00:27:01.023 [2024-11-21 03:35:48.363175] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 91584 00:27:01.023 [2024-11-21 03:35:48.363182] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 90624 00:27:01.023 [2024-11-21 03:35:48.363191] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0106 00:27:01.023 [2024-11-21 03:35:48.363199] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:01.023 [2024-11-21 03:35:48.363207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:01.023 [2024-11-21 03:35:48.363215] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:01.023 [2024-11-21 03:35:48.363222] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:01.023 [2024-11-21 03:35:48.363236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:01.023 [2024-11-21 03:35:48.363244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.023 [2024-11-21 03:35:48.363252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:01.023 [2024-11-21 03:35:48.363263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:27:01.023 [2024-11-21 03:35:48.363271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.023 [2024-11-21 03:35:48.365534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.023 [2024-11-21 03:35:48.365568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:01.023 [2024-11-21 03:35:48.365579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.244 ms 00:27:01.023 [2024-11-21 03:35:48.365588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.023 [2024-11-21 03:35:48.365717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.023 [2024-11-21 03:35:48.365726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:01.023 [2024-11-21 03:35:48.365740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:27:01.023 [2024-11-21 03:35:48.365749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.023 [2024-11-21 03:35:48.373218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.023 [2024-11-21 03:35:48.373262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:01.023 [2024-11-21 03:35:48.373273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.023 [2024-11-21 03:35:48.373286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.023 [2024-11-21 03:35:48.373354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.023 [2024-11-21 03:35:48.373364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:01.023 [2024-11-21 03:35:48.373373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.023 [2024-11-21 03:35:48.373381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.023 [2024-11-21 03:35:48.373444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.023 [2024-11-21 03:35:48.373454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:01.023 [2024-11-21 03:35:48.373466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.023 [2024-11-21 03:35:48.373474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.023 [2024-11-21 03:35:48.373490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.023 [2024-11-21 03:35:48.373502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:01.023 [2024-11-21 03:35:48.373513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.023 [2024-11-21 03:35:48.373524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.023 [2024-11-21 03:35:48.387395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.023 [2024-11-21 03:35:48.387440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:27:01.023 [2024-11-21 03:35:48.387450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.023 [2024-11-21 03:35:48.387459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.023 [2024-11-21 03:35:48.397995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.023 [2024-11-21 03:35:48.398053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:01.023 [2024-11-21 03:35:48.398064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.023 [2024-11-21 03:35:48.398072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.023 [2024-11-21 03:35:48.398170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.023 [2024-11-21 03:35:48.398181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:01.023 [2024-11-21 03:35:48.398196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.023 [2024-11-21 03:35:48.398204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.023 [2024-11-21 03:35:48.398242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.023 [2024-11-21 03:35:48.398252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:01.023 [2024-11-21 03:35:48.398264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.023 [2024-11-21 03:35:48.398273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.023 [2024-11-21 03:35:48.398350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.023 [2024-11-21 03:35:48.398359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:01.023 [2024-11-21 03:35:48.398368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.023 [2024-11-21 03:35:48.398376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.023 [2024-11-21 03:35:48.398407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.023 [2024-11-21 03:35:48.398421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:01.023 [2024-11-21 03:35:48.398433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.023 [2024-11-21 03:35:48.398444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.023 [2024-11-21 03:35:48.398485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.023 [2024-11-21 03:35:48.398494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:01.023 [2024-11-21 03:35:48.398506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.023 [2024-11-21 03:35:48.398515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.023 [2024-11-21 03:35:48.398561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.023 [2024-11-21 03:35:48.398572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:01.023 [2024-11-21 03:35:48.398587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.023 [2024-11-21 03:35:48.398595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.023 [2024-11-21 03:35:48.398725] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 
233.952 ms, result 0 00:27:02.410 00:27:02.410 00:27:02.410 03:35:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:04.959 03:35:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:04.959 [2024-11-21 03:35:51.970151] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:27:04.959 [2024-11-21 03:35:51.970300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94032 ] 00:27:04.959 [2024-11-21 03:35:52.106677] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:04.959 [2024-11-21 03:35:52.133814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.959 [2024-11-21 03:35:52.162420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.959 [2024-11-21 03:35:52.277108] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:04.959 [2024-11-21 03:35:52.277204] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:04.959 [2024-11-21 03:35:52.438558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.959 [2024-11-21 03:35:52.438625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:04.959 [2024-11-21 03:35:52.438645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:04.959 [2024-11-21 03:35:52.438654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.959 [2024-11-21 03:35:52.438713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.959 [2024-11-21 03:35:52.438724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:04.959 [2024-11-21 03:35:52.438738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:04.959 [2024-11-21 03:35:52.438748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.959 [2024-11-21 03:35:52.438772] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:04.959 [2024-11-21 03:35:52.439075] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:04.959 [2024-11-21 03:35:52.439106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.959 [2024-11-21 03:35:52.439114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:04.959 [2024-11-21 03:35:52.439126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:27:04.959 [2024-11-21 03:35:52.439135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.959 [2024-11-21 03:35:52.440964] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:04.959 [2024-11-21 03:35:52.444714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.959 [2024-11-21 03:35:52.444770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:04.959 [2024-11-21 
03:35:52.444781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.747 ms 00:27:04.959 [2024-11-21 03:35:52.444800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.959 [2024-11-21 03:35:52.444877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.959 [2024-11-21 03:35:52.444887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:04.959 [2024-11-21 03:35:52.444914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:04.959 [2024-11-21 03:35:52.444923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.959 [2024-11-21 03:35:52.452939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.959 [2024-11-21 03:35:52.452984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:04.959 [2024-11-21 03:35:52.453004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.970 ms 00:27:04.959 [2024-11-21 03:35:52.453012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.959 [2024-11-21 03:35:52.453108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.959 [2024-11-21 03:35:52.453118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:04.959 [2024-11-21 03:35:52.453127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:27:04.960 [2024-11-21 03:35:52.453135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.960 [2024-11-21 03:35:52.453195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.960 [2024-11-21 03:35:52.453205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:04.960 [2024-11-21 03:35:52.453214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:04.960 [2024-11-21 03:35:52.453225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.960 [2024-11-21 03:35:52.453247] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:04.960 [2024-11-21 03:35:52.455377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.960 [2024-11-21 03:35:52.455413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:04.960 [2024-11-21 03:35:52.455432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.136 ms 00:27:04.960 [2024-11-21 03:35:52.455442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.960 [2024-11-21 03:35:52.455478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.960 [2024-11-21 03:35:52.455486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:04.960 [2024-11-21 03:35:52.455495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:04.960 [2024-11-21 03:35:52.455506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.960 [2024-11-21 03:35:52.455528] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:04.960 [2024-11-21 03:35:52.455549] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:04.960 [2024-11-21 03:35:52.455590] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:04.960 [2024-11-21 03:35:52.455607] upgrade/ftl_sb_v5.c: 
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:04.960 [2024-11-21 03:35:52.455713] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:04.960 [2024-11-21 03:35:52.455724] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:04.960 [2024-11-21 03:35:52.455739] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:04.960 [2024-11-21 03:35:52.455750] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:04.960 [2024-11-21 03:35:52.455759] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:04.960 [2024-11-21 03:35:52.455768] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:04.960 [2024-11-21 03:35:52.455775] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:04.960 [2024-11-21 03:35:52.455785] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:04.960 [2024-11-21 03:35:52.455794] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:04.960 [2024-11-21 03:35:52.455802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.960 [2024-11-21 03:35:52.455813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:04.960 [2024-11-21 03:35:52.455821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:27:04.960 [2024-11-21 03:35:52.455828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.960 [2024-11-21 03:35:52.455929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.960 [2024-11-21 03:35:52.455939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:04.960 [2024-11-21 03:35:52.455947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:27:04.960 [2024-11-21 03:35:52.455955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.960 [2024-11-21 03:35:52.456057] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:04.960 [2024-11-21 03:35:52.456069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:04.960 [2024-11-21 03:35:52.456080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:04.960 [2024-11-21 03:35:52.456089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.960 [2024-11-21 03:35:52.456097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:04.960 [2024-11-21 03:35:52.456108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:04.960 [2024-11-21 03:35:52.456116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:04.960 [2024-11-21 03:35:52.456125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:04.960 [2024-11-21 03:35:52.456141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:04.960 [2024-11-21 03:35:52.456149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:04.960 [2024-11-21 03:35:52.456157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:04.960 [2024-11-21 03:35:52.456165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 
MiB 00:27:04.960 [2024-11-21 03:35:52.456172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:04.960 [2024-11-21 03:35:52.456181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:04.960 [2024-11-21 03:35:52.456189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:04.960 [2024-11-21 03:35:52.456199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.960 [2024-11-21 03:35:52.456207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:04.960 [2024-11-21 03:35:52.456215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:04.960 [2024-11-21 03:35:52.456223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.960 [2024-11-21 03:35:52.456231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:04.960 [2024-11-21 03:35:52.456239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:04.960 [2024-11-21 03:35:52.456252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:04.960 [2024-11-21 03:35:52.456260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:04.960 [2024-11-21 03:35:52.456268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:04.960 [2024-11-21 03:35:52.456276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:04.960 [2024-11-21 03:35:52.456283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:04.960 [2024-11-21 03:35:52.456291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:04.960 [2024-11-21 03:35:52.456298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:04.960 [2024-11-21 03:35:52.456306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:04.960 [2024-11-21 03:35:52.456313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:04.960 [2024-11-21 03:35:52.456320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:04.960 [2024-11-21 03:35:52.456327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:04.960 [2024-11-21 03:35:52.456334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:04.960 [2024-11-21 03:35:52.456342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:04.960 [2024-11-21 03:35:52.456350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:04.960 [2024-11-21 03:35:52.456358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:04.960 [2024-11-21 03:35:52.456366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:04.960 [2024-11-21 03:35:52.456378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:04.960 [2024-11-21 03:35:52.456386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:04.960 [2024-11-21 03:35:52.456394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.960 [2024-11-21 03:35:52.456401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:04.960 [2024-11-21 03:35:52.456409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:04.960 [2024-11-21 03:35:52.456417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.960 [2024-11-21 03:35:52.456425] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:04.960 [2024-11-21 03:35:52.456437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:04.960 [2024-11-21 03:35:52.456446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:04.960 [2024-11-21 03:35:52.456454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.960 [2024-11-21 03:35:52.456463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:04.960 [2024-11-21 03:35:52.456472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:04.960 [2024-11-21 03:35:52.456480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:04.960 [2024-11-21 03:35:52.456488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:04.960 [2024-11-21 03:35:52.456496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:04.960 [2024-11-21 03:35:52.456504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:04.960 [2024-11-21 03:35:52.456516] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:04.960 [2024-11-21 03:35:52.456527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:04.960 [2024-11-21 03:35:52.456540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:04.960 [2024-11-21 03:35:52.456548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:04.960 [2024-11-21 03:35:52.456555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:04.960 [2024-11-21 03:35:52.456562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:04.960 [2024-11-21 03:35:52.456569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:04.960 [2024-11-21 03:35:52.456576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:04.960 [2024-11-21 03:35:52.456584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:04.960 [2024-11-21 03:35:52.456592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:04.960 [2024-11-21 03:35:52.456599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:04.960 [2024-11-21 03:35:52.456606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:04.960 [2024-11-21 03:35:52.456613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:04.961 [2024-11-21 03:35:52.456621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:04.961 [2024-11-21 03:35:52.456629] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:04.961 [2024-11-21 03:35:52.456636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:04.961 [2024-11-21 03:35:52.456647] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:04.961 [2024-11-21 03:35:52.456656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:04.961 [2024-11-21 03:35:52.456664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:04.961 [2024-11-21 03:35:52.456671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:04.961 [2024-11-21 03:35:52.456683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:04.961 [2024-11-21 03:35:52.456691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:04.961 [2024-11-21 03:35:52.456698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.961 [2024-11-21 03:35:52.456706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:04.961 [2024-11-21 03:35:52.456714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:27:04.961 [2024-11-21 03:35:52.456721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.961 [2024-11-21 03:35:52.470664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.961 [2024-11-21 03:35:52.470719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:04.961 [2024-11-21 03:35:52.470732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.895 ms 00:27:04.961 [2024-11-21 03:35:52.470745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.961 [2024-11-21 03:35:52.470829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.961 [2024-11-21 03:35:52.470839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:04.961 [2024-11-21 03:35:52.470849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:04.961 [2024-11-21 03:35:52.470858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.961 [2024-11-21 03:35:52.493163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.961 [2024-11-21 03:35:52.493251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:04.961 [2024-11-21 03:35:52.493277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.226 ms 00:27:04.961 [2024-11-21 03:35:52.493294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.961 [2024-11-21 03:35:52.493373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.961 [2024-11-21 03:35:52.493395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:04.961 [2024-11-21 03:35:52.493412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:04.961 [2024-11-21 03:35:52.493428] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.961 [2024-11-21 03:35:52.494152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.961 [2024-11-21 03:35:52.494204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:04.961 [2024-11-21 03:35:52.494221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:27:04.961 [2024-11-21 03:35:52.494235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.961 [2024-11-21 03:35:52.494468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.961 [2024-11-21 03:35:52.494485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:04.961 [2024-11-21 03:35:52.494499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:27:04.961 [2024-11-21 03:35:52.494513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.961 [2024-11-21 03:35:52.503602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.961 [2024-11-21 03:35:52.503657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:04.961 [2024-11-21 03:35:52.503668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.050 ms 00:27:04.961 [2024-11-21 03:35:52.503676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.961 [2024-11-21 03:35:52.507646] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:04.961 [2024-11-21 03:35:52.507700] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:04.961 [2024-11-21 03:35:52.507716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.961 [2024-11-21 03:35:52.507725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:04.961 [2024-11-21 03:35:52.507734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.939 ms 00:27:04.961 [2024-11-21 03:35:52.507741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.223 [2024-11-21 03:35:52.524029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.223 [2024-11-21 03:35:52.524084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:05.223 [2024-11-21 03:35:52.524097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.232 ms 00:27:05.223 [2024-11-21 03:35:52.524106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.223 [2024-11-21 03:35:52.527275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.223 [2024-11-21 03:35:52.527326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:05.223 [2024-11-21 03:35:52.527336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.099 ms 00:27:05.223 [2024-11-21 03:35:52.527344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.223 [2024-11-21 03:35:52.530166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.223 [2024-11-21 03:35:52.530210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:05.223 [2024-11-21 03:35:52.530220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.776 ms 00:27:05.223 [2024-11-21 03:35:52.530227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.223 [2024-11-21 
03:35:52.530573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.223 [2024-11-21 03:35:52.530586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:05.223 [2024-11-21 03:35:52.530601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:27:05.223 [2024-11-21 03:35:52.530609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.223 [2024-11-21 03:35:52.556311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.223 [2024-11-21 03:35:52.556369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:05.223 [2024-11-21 03:35:52.556382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.680 ms 00:27:05.223 [2024-11-21 03:35:52.556391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.223 [2024-11-21 03:35:52.564746] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:05.223 [2024-11-21 03:35:52.567961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.223 [2024-11-21 03:35:52.568016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:05.223 [2024-11-21 03:35:52.568029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.513 ms 00:27:05.223 [2024-11-21 03:35:52.568038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.223 [2024-11-21 03:35:52.568117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.223 [2024-11-21 03:35:52.568133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:05.223 [2024-11-21 03:35:52.568142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:05.223 [2024-11-21 03:35:52.568151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.223 [2024-11-21 03:35:52.569833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.223 [2024-11-21 03:35:52.569886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:05.223 [2024-11-21 03:35:52.569945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.645 ms 00:27:05.223 [2024-11-21 03:35:52.569954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.223 [2024-11-21 03:35:52.569985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.223 [2024-11-21 03:35:52.569994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:05.223 [2024-11-21 03:35:52.570003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:05.223 [2024-11-21 03:35:52.570011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.223 [2024-11-21 03:35:52.570052] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:05.223 [2024-11-21 03:35:52.570064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.223 [2024-11-21 03:35:52.570079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:05.223 [2024-11-21 03:35:52.570091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:05.223 [2024-11-21 03:35:52.570099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.223 [2024-11-21 03:35:52.576053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.223 [2024-11-21 03:35:52.576105] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:05.223 [2024-11-21 03:35:52.576117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.935 ms 00:27:05.223 [2024-11-21 03:35:52.576135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.223 [2024-11-21 03:35:52.576228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.223 [2024-11-21 03:35:52.576239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:05.223 [2024-11-21 03:35:52.576248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:27:05.223 [2024-11-21 03:35:52.576263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.223 [2024-11-21 03:35:52.577450] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 138.437 ms, result 0 00:27:06.610  [2024-11-21T03:35:55.118Z] Copying: 1164/1048576 [kB] (1164 kBps) [2024-11-21T03:35:56.062Z] Copying: 4256/1048576 [kB] (3092 kBps) [2024-11-21T03:35:57.005Z] Copying: 31/1024 [MB] (27 MBps) [2024-11-21T03:35:57.949Z] Copying: 67/1024 [MB] (35 MBps) [2024-11-21T03:35:58.893Z] Copying: 94/1024 [MB] (27 MBps) [2024-11-21T03:35:59.834Z] Copying: 123/1024 [MB] (29 MBps) [2024-11-21T03:36:00.778Z] Copying: 151/1024 [MB] (27 MBps) [2024-11-21T03:36:02.165Z] Copying: 189/1024 [MB] (38 MBps) [2024-11-21T03:36:03.112Z] Copying: 210/1024 [MB] (21 MBps) [2024-11-21T03:36:04.096Z] Copying: 240/1024 [MB] (30 MBps) [2024-11-21T03:36:05.050Z] Copying: 267/1024 [MB] (26 MBps) [2024-11-21T03:36:05.997Z] Copying: 284/1024 [MB] (16 MBps) [2024-11-21T03:36:06.943Z] Copying: 309/1024 [MB] (25 MBps) [2024-11-21T03:36:07.889Z] Copying: 337/1024 [MB] (27 MBps) [2024-11-21T03:36:08.834Z] Copying: 367/1024 [MB] (29 MBps) [2024-11-21T03:36:09.778Z] Copying: 394/1024 [MB] (26 MBps) [2024-11-21T03:36:11.164Z] Copying: 422/1024 [MB] (28 MBps) [2024-11-21T03:36:12.109Z] Copying: 446/1024 [MB] (24 MBps) [2024-11-21T03:36:13.054Z] Copying: 471/1024 [MB] (24 MBps) [2024-11-21T03:36:13.999Z] Copying: 499/1024 [MB] (27 MBps) [2024-11-21T03:36:14.947Z] Copying: 528/1024 [MB] (28 MBps) [2024-11-21T03:36:15.890Z] Copying: 554/1024 [MB] (26 MBps) [2024-11-21T03:36:16.833Z] Copying: 580/1024 [MB] (25 MBps) [2024-11-21T03:36:17.778Z] Copying: 607/1024 [MB] (27 MBps) [2024-11-21T03:36:19.165Z] Copying: 641/1024 [MB] (33 MBps) [2024-11-21T03:36:20.108Z] Copying: 670/1024 [MB] (28 MBps) [2024-11-21T03:36:21.051Z] Copying: 697/1024 [MB] (27 MBps) [2024-11-21T03:36:21.996Z] Copying: 725/1024 [MB] (27 MBps) [2024-11-21T03:36:22.940Z] Copying: 744/1024 [MB] (18 MBps) [2024-11-21T03:36:23.886Z] Copying: 767/1024 [MB] (23 MBps) [2024-11-21T03:36:24.830Z] Copying: 798/1024 [MB] (30 MBps) [2024-11-21T03:36:25.775Z] Copying: 822/1024 [MB] (24 MBps) [2024-11-21T03:36:27.159Z] Copying: 853/1024 [MB] (31 MBps) [2024-11-21T03:36:28.103Z] Copying: 881/1024 [MB] (27 MBps) [2024-11-21T03:36:29.047Z] Copying: 909/1024 [MB] (27 MBps) [2024-11-21T03:36:29.990Z] Copying: 930/1024 [MB] (21 MBps) [2024-11-21T03:36:30.934Z] Copying: 947/1024 [MB] (17 MBps) [2024-11-21T03:36:31.878Z] Copying: 967/1024 [MB] (19 MBps) [2024-11-21T03:36:32.450Z] Copying: 1001/1024 [MB] (34 MBps) [2024-11-21T03:36:34.367Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-21 03:36:33.958228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.802 [2024-11-21 03:36:33.958330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinit core IO channel 00:27:46.802 [2024-11-21 03:36:33.958362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:46.802 [2024-11-21 03:36:33.958384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.802 [2024-11-21 03:36:33.958577] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:46.802 [2024-11-21 03:36:33.959344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.802 [2024-11-21 03:36:33.959392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:46.802 [2024-11-21 03:36:33.959419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:27:46.802 [2024-11-21 03:36:33.959453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.802 [2024-11-21 03:36:33.960057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.802 [2024-11-21 03:36:33.960100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:46.802 [2024-11-21 03:36:33.960121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:27:46.802 [2024-11-21 03:36:33.960141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.802 [2024-11-21 03:36:33.974640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.802 [2024-11-21 03:36:33.974679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:46.802 [2024-11-21 03:36:33.974695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.462 ms 00:27:46.802 [2024-11-21 03:36:33.974703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.802 [2024-11-21 03:36:33.980872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.802 [2024-11-21 03:36:33.980910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:46.802 [2024-11-21 03:36:33.980920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.141 ms 00:27:46.802 [2024-11-21 03:36:33.980927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.802 [2024-11-21 03:36:33.983308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.802 [2024-11-21 03:36:33.983341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:46.802 [2024-11-21 03:36:33.983349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.329 ms 00:27:46.802 [2024-11-21 03:36:33.983357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.802 [2024-11-21 03:36:33.986547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.802 [2024-11-21 03:36:33.986581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:46.802 [2024-11-21 03:36:33.986596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.161 ms 00:27:46.802 [2024-11-21 03:36:33.986604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.802 [2024-11-21 03:36:33.988685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.802 [2024-11-21 03:36:33.988715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:46.802 [2024-11-21 03:36:33.988724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.049 ms 00:27:46.802 [2024-11-21 03:36:33.988732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:46.802 [2024-11-21 03:36:33.990781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.802 [2024-11-21 03:36:33.990815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:46.802 [2024-11-21 03:36:33.990824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.034 ms 00:27:46.802 [2024-11-21 03:36:33.990831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.802 [2024-11-21 03:36:33.992149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.802 [2024-11-21 03:36:33.992181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:46.802 [2024-11-21 03:36:33.992189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.279 ms 00:27:46.802 [2024-11-21 03:36:33.992196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.802 [2024-11-21 03:36:33.993368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.802 [2024-11-21 03:36:33.993399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:46.802 [2024-11-21 03:36:33.993408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:27:46.802 [2024-11-21 03:36:33.993414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.802 [2024-11-21 03:36:33.994475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.803 [2024-11-21 03:36:33.994508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:46.803 [2024-11-21 03:36:33.994516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:27:46.803 [2024-11-21 03:36:33.994523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.803 [2024-11-21 03:36:33.994550] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:46.803 [2024-11-21 03:36:33.994564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:46.803 [2024-11-21 03:36:33.994574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:46.803 [2024-11-21 03:36:33.994582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 
03:36:33.994649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 
00:27:46.803 [2024-11-21 03:36:33.994833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.994998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 
wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:46.803 [2024-11-21 03:36:33.995218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:46.804 [2024-11-21 03:36:33.995226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:46.804 [2024-11-21 03:36:33.995234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:46.804 [2024-11-21 03:36:33.995241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:46.804 [2024-11-21 03:36:33.995248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:46.804 [2024-11-21 03:36:33.995255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:46.804 [2024-11-21 03:36:33.995262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:46.804 [2024-11-21 03:36:33.995269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:46.804 [2024-11-21 03:36:33.995276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:46.804 [2024-11-21 03:36:33.995283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:46.804 [2024-11-21 03:36:33.995290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:46.804 [2024-11-21 03:36:33.995297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:46.804 [2024-11-21 03:36:33.995305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:46.804 [2024-11-21 03:36:33.995313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:46.804 [2024-11-21 03:36:33.995329] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:46.804 [2024-11-21 03:36:33.995339] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b77f572-ca96-4772-ae1d-10eeb1a8dd34 00:27:46.804 [2024-11-21 03:36:33.995349] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:46.804 [2024-11-21 03:36:33.995357] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 174016 00:27:46.804 [2024-11-21 03:36:33.995364] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 172032 00:27:46.804 [2024-11-21 03:36:33.995371] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0115 00:27:46.804 [2024-11-21 03:36:33.995378] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:46.804 [2024-11-21 03:36:33.995386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:46.804 [2024-11-21 03:36:33.995393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:46.804 [2024-11-21 03:36:33.995399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:46.804 [2024-11-21 03:36:33.995406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:46.804 [2024-11-21 03:36:33.995413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.804 [2024-11-21 03:36:33.995420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 
00:27:46.804 [2024-11-21 03:36:33.995428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.863 ms 00:27:46.804 [2024-11-21 03:36:33.995435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.804 [2024-11-21 03:36:33.996838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.804 [2024-11-21 03:36:33.996869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:46.804 [2024-11-21 03:36:33.996877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:27:46.804 [2024-11-21 03:36:33.996885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.804 [2024-11-21 03:36:33.996973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.804 [2024-11-21 03:36:33.996982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:46.804 [2024-11-21 03:36:33.996995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:46.804 [2024-11-21 03:36:33.997003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.804 [2024-11-21 03:36:34.001840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.804 [2024-11-21 03:36:34.001872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:46.804 [2024-11-21 03:36:34.001881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.804 [2024-11-21 03:36:34.001889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.804 [2024-11-21 03:36:34.001947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.804 [2024-11-21 03:36:34.001955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:46.804 [2024-11-21 03:36:34.001968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.804 [2024-11-21 03:36:34.001975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.804 [2024-11-21 03:36:34.002024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.804 [2024-11-21 03:36:34.002033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:46.804 [2024-11-21 03:36:34.002041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.804 [2024-11-21 03:36:34.002047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.804 [2024-11-21 03:36:34.002061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.804 [2024-11-21 03:36:34.002069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:46.804 [2024-11-21 03:36:34.002076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.804 [2024-11-21 03:36:34.002086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.804 [2024-11-21 03:36:34.010714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.804 [2024-11-21 03:36:34.010756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:46.804 [2024-11-21 03:36:34.010765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.804 [2024-11-21 03:36:34.010776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.804 [2024-11-21 03:36:34.017884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.804 [2024-11-21 03:36:34.017931] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:46.804 [2024-11-21 03:36:34.017947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.804 [2024-11-21 03:36:34.017955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.804 [2024-11-21 03:36:34.017978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.804 [2024-11-21 03:36:34.017986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:46.804 [2024-11-21 03:36:34.017994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.804 [2024-11-21 03:36:34.018001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.804 [2024-11-21 03:36:34.018046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.804 [2024-11-21 03:36:34.018054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:46.804 [2024-11-21 03:36:34.018062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.804 [2024-11-21 03:36:34.018069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.804 [2024-11-21 03:36:34.018131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.804 [2024-11-21 03:36:34.018140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:46.804 [2024-11-21 03:36:34.018148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.804 [2024-11-21 03:36:34.018154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.804 [2024-11-21 03:36:34.018194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.804 [2024-11-21 03:36:34.018204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:46.804 [2024-11-21 03:36:34.018211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.804 [2024-11-21 03:36:34.018219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.804 [2024-11-21 03:36:34.018259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.804 [2024-11-21 03:36:34.018268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:46.804 [2024-11-21 03:36:34.018275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.804 [2024-11-21 03:36:34.018282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.804 [2024-11-21 03:36:34.018321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.804 [2024-11-21 03:36:34.018331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:46.804 [2024-11-21 03:36:34.018338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.804 [2024-11-21 03:36:34.018349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.804 [2024-11-21 03:36:34.018460] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.265 ms, result 0 00:27:46.804 00:27:46.804 00:27:46.804 03:36:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:48.719 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:48.719 03:36:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:48.719 [2024-11-21 03:36:36.156465] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:27:48.719 [2024-11-21 03:36:36.156555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94483 ] 00:27:48.719 [2024-11-21 03:36:36.281328] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:48.980 [2024-11-21 03:36:36.312889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.980 [2024-11-21 03:36:36.333648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.980 [2024-11-21 03:36:36.433053] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:48.980 [2024-11-21 03:36:36.433131] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:49.242 [2024-11-21 03:36:36.595279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.242 [2024-11-21 03:36:36.595341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:49.242 [2024-11-21 03:36:36.595356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:49.242 [2024-11-21 03:36:36.595365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.242 [2024-11-21 03:36:36.595425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.242 [2024-11-21 03:36:36.595437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:49.242 [2024-11-21 03:36:36.595450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:49.242 [2024-11-21 03:36:36.595458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.242 [2024-11-21 03:36:36.595486] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:49.242 [2024-11-21 03:36:36.595921] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:49.242 [2024-11-21 03:36:36.595954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.242 [2024-11-21 03:36:36.595962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:49.242 [2024-11-21 03:36:36.595976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:27:49.242 [2024-11-21 03:36:36.595984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.242 [2024-11-21 03:36:36.597817] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:49.242 [2024-11-21 03:36:36.601952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.242 [2024-11-21 03:36:36.602008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:49.242 [2024-11-21 03:36:36.602020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.137 ms 00:27:49.243 [2024-11-21 03:36:36.602044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.243 [2024-11-21 03:36:36.602123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:27:49.243 [2024-11-21 03:36:36.602139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:49.243 [2024-11-21 03:36:36.602149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:49.243 [2024-11-21 03:36:36.602156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.243 [2024-11-21 03:36:36.610849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.243 [2024-11-21 03:36:36.610913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:49.243 [2024-11-21 03:36:36.610929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.635 ms 00:27:49.243 [2024-11-21 03:36:36.610938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.243 [2024-11-21 03:36:36.611042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.243 [2024-11-21 03:36:36.611058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:49.243 [2024-11-21 03:36:36.611068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:27:49.243 [2024-11-21 03:36:36.611075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.243 [2024-11-21 03:36:36.611137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.243 [2024-11-21 03:36:36.611155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:49.243 [2024-11-21 03:36:36.611163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:49.243 [2024-11-21 03:36:36.611175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.243 [2024-11-21 03:36:36.611198] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:49.243 [2024-11-21 03:36:36.613164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.243 [2024-11-21 03:36:36.613212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:49.243 [2024-11-21 03:36:36.613223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.972 ms 00:27:49.243 [2024-11-21 03:36:36.613234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.243 [2024-11-21 03:36:36.613269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.243 [2024-11-21 03:36:36.613281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:49.243 [2024-11-21 03:36:36.613290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:49.243 [2024-11-21 03:36:36.613301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.243 [2024-11-21 03:36:36.613324] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:49.243 [2024-11-21 03:36:36.613345] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:49.243 [2024-11-21 03:36:36.613383] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:49.243 [2024-11-21 03:36:36.613400] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:49.243 [2024-11-21 03:36:36.613507] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:49.243 [2024-11-21 03:36:36.613521] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:49.243 [2024-11-21 03:36:36.613536] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:49.243 [2024-11-21 03:36:36.613547] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:49.243 [2024-11-21 03:36:36.613560] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:49.243 [2024-11-21 03:36:36.613569] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:49.243 [2024-11-21 03:36:36.613577] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:49.243 [2024-11-21 03:36:36.613585] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:49.243 [2024-11-21 03:36:36.613599] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:49.243 [2024-11-21 03:36:36.613610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.243 [2024-11-21 03:36:36.613618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:49.243 [2024-11-21 03:36:36.613627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:27:49.243 [2024-11-21 03:36:36.613634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.243 [2024-11-21 03:36:36.613720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.243 [2024-11-21 03:36:36.613734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:49.243 [2024-11-21 03:36:36.613743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:49.243 [2024-11-21 03:36:36.613754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.243 [2024-11-21 03:36:36.613851] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:49.243 [2024-11-21 03:36:36.613870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:49.243 [2024-11-21 03:36:36.613879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:49.243 [2024-11-21 03:36:36.613888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.243 [2024-11-21 03:36:36.613914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:49.243 [2024-11-21 03:36:36.613922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:49.243 [2024-11-21 03:36:36.613929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:49.243 [2024-11-21 03:36:36.613936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:49.243 [2024-11-21 03:36:36.613955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:49.243 [2024-11-21 03:36:36.613962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:49.243 [2024-11-21 03:36:36.613969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:49.243 [2024-11-21 03:36:36.613976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:49.243 [2024-11-21 03:36:36.613982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:49.243 [2024-11-21 03:36:36.613990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:49.243 [2024-11-21 03:36:36.613997] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:49.243 [2024-11-21 03:36:36.614004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.243 [2024-11-21 03:36:36.614015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:49.243 [2024-11-21 03:36:36.614022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:49.243 [2024-11-21 03:36:36.614030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.243 [2024-11-21 03:36:36.614037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:49.243 [2024-11-21 03:36:36.614044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:49.243 [2024-11-21 03:36:36.614052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:49.243 [2024-11-21 03:36:36.614059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:49.243 [2024-11-21 03:36:36.614066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:49.243 [2024-11-21 03:36:36.614082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:49.243 [2024-11-21 03:36:36.614090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:49.243 [2024-11-21 03:36:36.614096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:49.243 [2024-11-21 03:36:36.614103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:49.243 [2024-11-21 03:36:36.614110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:49.243 [2024-11-21 03:36:36.614117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:49.243 [2024-11-21 03:36:36.614124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:49.243 [2024-11-21 03:36:36.614130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:49.243 [2024-11-21 03:36:36.614137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:49.243 [2024-11-21 03:36:36.614144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:49.243 [2024-11-21 03:36:36.614150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:49.243 [2024-11-21 03:36:36.614157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:49.243 [2024-11-21 03:36:36.614163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:49.243 [2024-11-21 03:36:36.614181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:49.243 [2024-11-21 03:36:36.614188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:49.243 [2024-11-21 03:36:36.614195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.243 [2024-11-21 03:36:36.614204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:49.243 [2024-11-21 03:36:36.614212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:49.243 [2024-11-21 03:36:36.614220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.243 [2024-11-21 03:36:36.614227] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:49.243 [2024-11-21 03:36:36.614238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:49.243 [2024-11-21 03:36:36.614250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:27:49.243 [2024-11-21 03:36:36.614261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.243 [2024-11-21 03:36:36.614268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:49.243 [2024-11-21 03:36:36.614278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:49.243 [2024-11-21 03:36:36.614285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:49.243 [2024-11-21 03:36:36.614293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:49.243 [2024-11-21 03:36:36.614301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:49.243 [2024-11-21 03:36:36.614308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:49.243 [2024-11-21 03:36:36.614317] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:49.243 [2024-11-21 03:36:36.614327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:49.243 [2024-11-21 03:36:36.614337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:49.243 [2024-11-21 03:36:36.614347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:49.244 [2024-11-21 03:36:36.614355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:49.244 [2024-11-21 03:36:36.614363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:49.244 [2024-11-21 03:36:36.614370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:49.244 [2024-11-21 03:36:36.614378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:49.244 [2024-11-21 03:36:36.614387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:49.244 [2024-11-21 03:36:36.614395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:49.244 [2024-11-21 03:36:36.614402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:49.244 [2024-11-21 03:36:36.614410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:49.244 [2024-11-21 03:36:36.614417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:49.244 [2024-11-21 03:36:36.614426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:49.244 [2024-11-21 03:36:36.614432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:49.244 [2024-11-21 03:36:36.614440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
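The nvc-side superblock dump above (and the base-device dump that follows) records every metadata region as type/version/offset/size, all counted in 4 KiB blocks — the base data region's 0x1900000 blocks against its 102400.00 MiB size pins the block size at 4096 bytes. A minimal gawk sketch for flattening those records into a readable table; the build.log filename is an assumption, and it expects one record per line as in the raw console stream:

#!/usr/bin/env bash
# Hedged helper, not part of the SPDK tree: tabulate the "Region type:..."
# records from the FTL superblock layout dump. Needs gawk for strtonum().
awk '/Region type:0x/ {
    for (i = 1; i <= NF; i++) {
        if ($i ~ /^type:/)     { split($i, t, ":"); type = t[2] }
        if ($i ~ /^blk_offs:/) { split($i, o, ":"); offs = o[2] }
        if ($i ~ /^blk_sz:/)   { split($i, s, ":"); sz   = s[2] }
    }
    # One FTL block is 4096 bytes in this run, so blocks -> MiB is *4096/2^20.
    printf "type=%-10s offs_blk=%-10d size_MiB=%.2f\n",
           type, strtonum(offs), strtonum(sz) * 4096 / 1048576
}' build.log

For the first record above (type:0x0, blk_sz:0x20) this prints size_MiB=0.12, matching the 0.12 MiB the dump itself reports for the sb region.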
00:27:49.244 [2024-11-21 03:36:36.614447] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:49.244 [2024-11-21 03:36:36.614455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:49.244 [2024-11-21 03:36:36.614466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:49.244 [2024-11-21 03:36:36.614475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:49.244 [2024-11-21 03:36:36.614483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:49.244 [2024-11-21 03:36:36.614490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:49.244 [2024-11-21 03:36:36.614498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.614505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:49.244 [2024-11-21 03:36:36.614512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:27:49.244 [2024-11-21 03:36:36.614524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.628773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.628827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:49.244 [2024-11-21 03:36:36.628839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.198 ms 00:27:49.244 [2024-11-21 03:36:36.628848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.628953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.628964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:49.244 [2024-11-21 03:36:36.628974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:27:49.244 [2024-11-21 03:36:36.628983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.650607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.650666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:49.244 [2024-11-21 03:36:36.650680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.564 ms 00:27:49.244 [2024-11-21 03:36:36.650691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.650739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.650757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:49.244 [2024-11-21 03:36:36.650767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:49.244 [2024-11-21 03:36:36.650777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.651379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.651423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:49.244 [2024-11-21 
03:36:36.651435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:27:49.244 [2024-11-21 03:36:36.651444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.651614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.651626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:49.244 [2024-11-21 03:36:36.651636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:27:49.244 [2024-11-21 03:36:36.651645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.659357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.659405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:49.244 [2024-11-21 03:36:36.659416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.689 ms 00:27:49.244 [2024-11-21 03:36:36.659431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.663175] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:49.244 [2024-11-21 03:36:36.663233] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:49.244 [2024-11-21 03:36:36.663250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.663258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:49.244 [2024-11-21 03:36:36.663267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.726 ms 00:27:49.244 [2024-11-21 03:36:36.663274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.679585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.679640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:49.244 [2024-11-21 03:36:36.679652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.260 ms 00:27:49.244 [2024-11-21 03:36:36.679660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.682920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.682971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:49.244 [2024-11-21 03:36:36.682982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.198 ms 00:27:49.244 [2024-11-21 03:36:36.682990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.685748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.685807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:49.244 [2024-11-21 03:36:36.685817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.710 ms 00:27:49.244 [2024-11-21 03:36:36.685824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.686221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.686257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:49.244 [2024-11-21 03:36:36.686273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.304 ms 00:27:49.244 [2024-11-21 03:36:36.686281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.711845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.711934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:49.244 [2024-11-21 03:36:36.711948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.537 ms 00:27:49.244 [2024-11-21 03:36:36.711967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.720772] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:49.244 [2024-11-21 03:36:36.723860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.723918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:49.244 [2024-11-21 03:36:36.723932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.215 ms 00:27:49.244 [2024-11-21 03:36:36.723941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.724030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.724044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:49.244 [2024-11-21 03:36:36.724053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:49.244 [2024-11-21 03:36:36.724062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.724845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.724887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:49.244 [2024-11-21 03:36:36.724918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:27:49.244 [2024-11-21 03:36:36.724926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.724954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.724962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:49.244 [2024-11-21 03:36:36.724971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:49.244 [2024-11-21 03:36:36.724979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.725024] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:49.244 [2024-11-21 03:36:36.725038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.725047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:49.244 [2024-11-21 03:36:36.725058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:49.244 [2024-11-21 03:36:36.725066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.244 [2024-11-21 03:36:36.730341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.730392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:49.244 [2024-11-21 03:36:36.730402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.257 ms 00:27:49.244 [2024-11-21 03:36:36.730411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:27:49.244 [2024-11-21 03:36:36.730496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.244 [2024-11-21 03:36:36.730506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:49.244 [2024-11-21 03:36:36.730516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:27:49.244 [2024-11-21 03:36:36.730661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.245 [2024-11-21 03:36:36.731868] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.153 ms, result 0 00:27:50.711  [2024-11-21T03:36:39.222Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-21T03:36:40.164Z] Copying: 33/1024 [MB] (16 MBps) [2024-11-21T03:36:41.105Z] Copying: 50/1024 [MB] (17 MBps) [2024-11-21T03:36:42.047Z] Copying: 68/1024 [MB] (18 MBps) [2024-11-21T03:36:42.987Z] Copying: 87/1024 [MB] (18 MBps) [2024-11-21T03:36:43.930Z] Copying: 105/1024 [MB] (17 MBps) [2024-11-21T03:36:45.316Z] Copying: 121/1024 [MB] (16 MBps) [2024-11-21T03:36:46.262Z] Copying: 136/1024 [MB] (15 MBps) [2024-11-21T03:36:47.206Z] Copying: 153/1024 [MB] (16 MBps) [2024-11-21T03:36:48.151Z] Copying: 166/1024 [MB] (12 MBps) [2024-11-21T03:36:49.096Z] Copying: 187/1024 [MB] (20 MBps) [2024-11-21T03:36:50.039Z] Copying: 203/1024 [MB] (16 MBps) [2024-11-21T03:36:50.985Z] Copying: 213/1024 [MB] (10 MBps) [2024-11-21T03:36:51.930Z] Copying: 230/1024 [MB] (17 MBps) [2024-11-21T03:36:53.319Z] Copying: 244/1024 [MB] (13 MBps) [2024-11-21T03:36:54.263Z] Copying: 272/1024 [MB] (28 MBps) [2024-11-21T03:36:55.208Z] Copying: 294/1024 [MB] (22 MBps) [2024-11-21T03:36:56.153Z] Copying: 318/1024 [MB] (24 MBps) [2024-11-21T03:36:57.098Z] Copying: 333/1024 [MB] (14 MBps) [2024-11-21T03:36:58.041Z] Copying: 347/1024 [MB] (14 MBps) [2024-11-21T03:36:58.985Z] Copying: 361/1024 [MB] (13 MBps) [2024-11-21T03:36:59.930Z] Copying: 374/1024 [MB] (12 MBps) [2024-11-21T03:37:01.315Z] Copying: 395/1024 [MB] (21 MBps) [2024-11-21T03:37:02.259Z] Copying: 413/1024 [MB] (17 MBps) [2024-11-21T03:37:03.204Z] Copying: 430/1024 [MB] (17 MBps) [2024-11-21T03:37:04.148Z] Copying: 444/1024 [MB] (13 MBps) [2024-11-21T03:37:05.093Z] Copying: 461/1024 [MB] (17 MBps) [2024-11-21T03:37:06.038Z] Copying: 474/1024 [MB] (13 MBps) [2024-11-21T03:37:06.981Z] Copying: 488/1024 [MB] (13 MBps) [2024-11-21T03:37:07.924Z] Copying: 498/1024 [MB] (10 MBps) [2024-11-21T03:37:08.932Z] Copying: 509/1024 [MB] (10 MBps) [2024-11-21T03:37:10.350Z] Copying: 525/1024 [MB] (16 MBps) [2024-11-21T03:37:10.922Z] Copying: 543/1024 [MB] (17 MBps) [2024-11-21T03:37:12.309Z] Copying: 556/1024 [MB] (13 MBps) [2024-11-21T03:37:13.254Z] Copying: 572/1024 [MB] (16 MBps) [2024-11-21T03:37:14.196Z] Copying: 583/1024 [MB] (10 MBps) [2024-11-21T03:37:15.141Z] Copying: 594/1024 [MB] (10 MBps) [2024-11-21T03:37:16.085Z] Copying: 614/1024 [MB] (20 MBps) [2024-11-21T03:37:17.031Z] Copying: 631/1024 [MB] (16 MBps) [2024-11-21T03:37:17.974Z] Copying: 649/1024 [MB] (17 MBps) [2024-11-21T03:37:18.918Z] Copying: 668/1024 [MB] (19 MBps) [2024-11-21T03:37:20.303Z] Copying: 684/1024 [MB] (15 MBps) [2024-11-21T03:37:21.247Z] Copying: 702/1024 [MB] (18 MBps) [2024-11-21T03:37:22.190Z] Copying: 728/1024 [MB] (26 MBps) [2024-11-21T03:37:23.134Z] Copying: 750/1024 [MB] (21 MBps) [2024-11-21T03:37:24.076Z] Copying: 766/1024 [MB] (16 MBps) [2024-11-21T03:37:25.017Z] Copying: 781/1024 [MB] (14 MBps) [2024-11-21T03:37:25.961Z] Copying: 798/1024 [MB] (16 MBps) 
[2024-11-21T03:37:27.349Z] Copying: 813/1024 [MB] (15 MBps) [2024-11-21T03:37:27.922Z] Copying: 827/1024 [MB] (14 MBps) [2024-11-21T03:37:29.308Z] Copying: 838/1024 [MB] (10 MBps) [2024-11-21T03:37:30.251Z] Copying: 853/1024 [MB] (14 MBps) [2024-11-21T03:37:31.196Z] Copying: 874/1024 [MB] (21 MBps) [2024-11-21T03:37:32.140Z] Copying: 887/1024 [MB] (12 MBps) [2024-11-21T03:37:33.084Z] Copying: 901/1024 [MB] (14 MBps) [2024-11-21T03:37:34.030Z] Copying: 914/1024 [MB] (13 MBps) [2024-11-21T03:37:34.975Z] Copying: 932/1024 [MB] (17 MBps) [2024-11-21T03:37:35.921Z] Copying: 946/1024 [MB] (14 MBps) [2024-11-21T03:37:37.310Z] Copying: 965/1024 [MB] (18 MBps) [2024-11-21T03:37:38.255Z] Copying: 982/1024 [MB] (16 MBps) [2024-11-21T03:37:39.201Z] Copying: 997/1024 [MB] (15 MBps) [2024-11-21T03:37:39.775Z] Copying: 1014/1024 [MB] (17 MBps) [2024-11-21T03:37:40.424Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-21 03:37:40.143962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.859 [2024-11-21 03:37:40.144041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:52.859 [2024-11-21 03:37:40.144058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:52.859 [2024-11-21 03:37:40.144073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.859 [2024-11-21 03:37:40.144099] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:52.859 [2024-11-21 03:37:40.145721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.859 [2024-11-21 03:37:40.145760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:52.859 [2024-11-21 03:37:40.145782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.604 ms 00:28:52.859 [2024-11-21 03:37:40.145795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.859 [2024-11-21 03:37:40.146059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.859 [2024-11-21 03:37:40.146071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:52.859 [2024-11-21 03:37:40.146080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:28:52.859 [2024-11-21 03:37:40.146093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.859 [2024-11-21 03:37:40.149638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.859 [2024-11-21 03:37:40.149659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:52.859 [2024-11-21 03:37:40.149670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.530 ms 00:28:52.859 [2024-11-21 03:37:40.149679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.859 [2024-11-21 03:37:40.156403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.859 [2024-11-21 03:37:40.156441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:52.859 [2024-11-21 03:37:40.156453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.700 ms 00:28:52.859 [2024-11-21 03:37:40.156461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.859 [2024-11-21 03:37:40.160031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.859 [2024-11-21 03:37:40.160078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:52.859 [2024-11-21 
03:37:40.160090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.499 ms 00:28:52.859 [2024-11-21 03:37:40.160098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.859 [2024-11-21 03:37:40.165742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.859 [2024-11-21 03:37:40.165786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:52.859 [2024-11-21 03:37:40.165798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.597 ms 00:28:52.859 [2024-11-21 03:37:40.165807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.859 [2024-11-21 03:37:40.170079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.859 [2024-11-21 03:37:40.170122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:52.859 [2024-11-21 03:37:40.170145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.220 ms 00:28:52.859 [2024-11-21 03:37:40.170155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.859 [2024-11-21 03:37:40.173789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.859 [2024-11-21 03:37:40.173831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:52.859 [2024-11-21 03:37:40.173844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.612 ms 00:28:52.859 [2024-11-21 03:37:40.173866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.859 [2024-11-21 03:37:40.176879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.859 [2024-11-21 03:37:40.176932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:52.859 [2024-11-21 03:37:40.176943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.952 ms 00:28:52.859 [2024-11-21 03:37:40.176951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.859 [2024-11-21 03:37:40.179269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.859 [2024-11-21 03:37:40.179309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:52.859 [2024-11-21 03:37:40.179319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.277 ms 00:28:52.859 [2024-11-21 03:37:40.179327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.859 [2024-11-21 03:37:40.182005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.859 [2024-11-21 03:37:40.182044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:52.859 [2024-11-21 03:37:40.182054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.591 ms 00:28:52.859 [2024-11-21 03:37:40.182062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.859 [2024-11-21 03:37:40.182100] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:52.859 [2024-11-21 03:37:40.182116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:52.859 [2024-11-21 03:37:40.182127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:52.859 [2024-11-21 03:37:40.182137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:52.859 [2024-11-21 03:37:40.182146] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:52.859 [2024-11-21 03:37:40.182155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:52.859 [2024-11-21 03:37:40.182163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:52.859 [2024-11-21 03:37:40.182171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:52.859 [2024-11-21 03:37:40.182179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:52.859 [2024-11-21 03:37:40.182187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:52.859 [2024-11-21 03:37:40.182195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:52.859 [2024-11-21 03:37:40.182204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:52.859 [2024-11-21 03:37:40.182213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182357] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 
03:37:40.182555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:28:52.860 [2024-11-21 03:37:40.182753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:52.860 [2024-11-21 03:37:40.182953] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:52.860 [2024-11-21 03:37:40.182971] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b77f572-ca96-4772-ae1d-10eeb1a8dd34 00:28:52.861 [2024-11-21 03:37:40.182980] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:52.861 [2024-11-21 
03:37:40.182989] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:52.861 [2024-11-21 03:37:40.182997] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:52.861 [2024-11-21 03:37:40.183005] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:52.861 [2024-11-21 03:37:40.183015] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:52.861 [2024-11-21 03:37:40.183024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:52.861 [2024-11-21 03:37:40.183032] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:52.861 [2024-11-21 03:37:40.183039] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:52.861 [2024-11-21 03:37:40.183046] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:52.861 [2024-11-21 03:37:40.183053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.861 [2024-11-21 03:37:40.183071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:52.861 [2024-11-21 03:37:40.183080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:28:52.861 [2024-11-21 03:37:40.183094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.861 [2024-11-21 03:37:40.185408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.861 [2024-11-21 03:37:40.185444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:52.861 [2024-11-21 03:37:40.185455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.297 ms 00:28:52.861 [2024-11-21 03:37:40.185463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.861 [2024-11-21 03:37:40.185584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.861 [2024-11-21 03:37:40.185594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:52.861 [2024-11-21 03:37:40.185603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:28:52.861 [2024-11-21 03:37:40.185610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.861 [2024-11-21 03:37:40.193183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.861 [2024-11-21 03:37:40.193224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:52.861 [2024-11-21 03:37:40.193234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.861 [2024-11-21 03:37:40.193245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.861 [2024-11-21 03:37:40.193298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.861 [2024-11-21 03:37:40.193307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:52.861 [2024-11-21 03:37:40.193315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.861 [2024-11-21 03:37:40.193322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.861 [2024-11-21 03:37:40.193390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.861 [2024-11-21 03:37:40.193404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:52.861 [2024-11-21 03:37:40.193418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.861 [2024-11-21 03:37:40.193426] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:52.861 [2024-11-21 03:37:40.193446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.861 [2024-11-21 03:37:40.193454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:52.861 [2024-11-21 03:37:40.193462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.861 [2024-11-21 03:37:40.193472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.861 [2024-11-21 03:37:40.207383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.861 [2024-11-21 03:37:40.207433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:52.861 [2024-11-21 03:37:40.207444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.861 [2024-11-21 03:37:40.207459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.861 [2024-11-21 03:37:40.218154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.861 [2024-11-21 03:37:40.218201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:52.861 [2024-11-21 03:37:40.218213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.861 [2024-11-21 03:37:40.218231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.861 [2024-11-21 03:37:40.218283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.861 [2024-11-21 03:37:40.218294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:52.861 [2024-11-21 03:37:40.218302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.861 [2024-11-21 03:37:40.218310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.861 [2024-11-21 03:37:40.218347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.861 [2024-11-21 03:37:40.218363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:52.861 [2024-11-21 03:37:40.218371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.861 [2024-11-21 03:37:40.218379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.861 [2024-11-21 03:37:40.218448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.861 [2024-11-21 03:37:40.218457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:52.861 [2024-11-21 03:37:40.218466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.861 [2024-11-21 03:37:40.218473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.861 [2024-11-21 03:37:40.218503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.861 [2024-11-21 03:37:40.218518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:52.861 [2024-11-21 03:37:40.218529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.861 [2024-11-21 03:37:40.218537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.861 [2024-11-21 03:37:40.218582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.861 [2024-11-21 03:37:40.218595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:52.861 [2024-11-21 03:37:40.218604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:28:52.861 [2024-11-21 03:37:40.218611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.861 [2024-11-21 03:37:40.218654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.861 [2024-11-21 03:37:40.218666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:52.861 [2024-11-21 03:37:40.218675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.861 [2024-11-21 03:37:40.218685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.861 [2024-11-21 03:37:40.218814] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 74.860 ms, result 0 00:28:53.122 00:28:53.122 00:28:53.122 03:37:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:55.671 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:55.671 03:37:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:55.671 03:37:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:55.671 03:37:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:55.671 03:37:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:55.671 03:37:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:55.671 03:37:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:55.671 03:37:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:55.671 Process with pid 92554 is not found 00:28:55.671 03:37:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 92554 00:28:55.671 03:37:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 92554 ']' 00:28:55.671 03:37:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 92554 00:28:55.671 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (92554) - No such process 00:28:55.671 03:37:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 92554 is not found' 00:28:55.671 03:37:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:55.671 Remove shared memory files 00:28:55.671 03:37:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:55.671 03:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:55.671 03:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:55.671 03:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:55.933 03:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:55.933 03:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:55.933 03:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:55.933 00:28:55.933 real 4m7.971s 00:28:55.933 user 4m33.846s 00:28:55.933 sys 0m29.179s 00:28:55.933 03:37:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:55.933 ************************************ 00:28:55.933 END TEST ftl_dirty_shutdown 00:28:55.933 ************************************ 00:28:55.933 03:37:43 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:28:55.933 03:37:43 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:55.933 03:37:43 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:55.933 03:37:43 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:55.933 03:37:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:55.933 ************************************ 00:28:55.933 START TEST ftl_upgrade_shutdown 00:28:55.933 ************************************ 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:55.933 * Looking for test storage... 00:28:55.933 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:55.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.933 --rc genhtml_branch_coverage=1 00:28:55.933 --rc genhtml_function_coverage=1 00:28:55.933 --rc genhtml_legend=1 00:28:55.933 --rc geninfo_all_blocks=1 00:28:55.933 --rc geninfo_unexecuted_blocks=1 00:28:55.933 00:28:55.933 ' 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:55.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.933 --rc genhtml_branch_coverage=1 00:28:55.933 --rc genhtml_function_coverage=1 00:28:55.933 --rc genhtml_legend=1 00:28:55.933 --rc geninfo_all_blocks=1 00:28:55.933 --rc geninfo_unexecuted_blocks=1 00:28:55.933 00:28:55.933 ' 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:55.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.933 --rc genhtml_branch_coverage=1 00:28:55.933 --rc genhtml_function_coverage=1 00:28:55.933 --rc genhtml_legend=1 00:28:55.933 --rc geninfo_all_blocks=1 00:28:55.933 --rc geninfo_unexecuted_blocks=1 00:28:55.933 00:28:55.933 ' 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:55.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.933 --rc genhtml_branch_coverage=1 00:28:55.933 --rc genhtml_function_coverage=1 00:28:55.933 --rc genhtml_legend=1 00:28:55.933 --rc geninfo_all_blocks=1 00:28:55.933 --rc geninfo_unexecuted_blocks=1 00:28:55.933 00:28:55.933 ' 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:55.933 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:55.934 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:55.934 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:55.934 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:55.934 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:55.934 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:55.934 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:55.934 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:55.934 03:37:43 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:55.934 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:55.934 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:55.934 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:55.934 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:55.934 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:55.934 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:55.934 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:55.934 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:55.934 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:55.934 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:55.934 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:56.194 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95236 00:28:56.194 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:56.194 03:37:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95236 00:28:56.195 03:37:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95236 ']' 00:28:56.195 03:37:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.195 03:37:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.195 03:37:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.195 03:37:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.195 03:37:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:56.195 [2024-11-21 03:37:43.591118] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:28:56.195 [2024-11-21 03:37:43.591291] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95236 ] 00:28:56.195 [2024-11-21 03:37:43.732540] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
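[Annotation] For context, the target bring-up traced above (ftl/common.sh@87-91: launch spdk_tgt pinned to core 0, then waitforlisten on /var/tmp/spdk.sock) reduces to the sketch below. The liveness probe via rpc_get_methods and the 0.5 s poll interval are illustrative assumptions, not lifted from this harness:

    # Start the SPDK target pinned to core 0 and block until its RPC socket
    # answers, failing fast if the process dies before the socket comes up.
    spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$spdk_tgt_bin" '--cpumask=[0]' &
    spdk_tgt_pid=$!
    until "$rpc_py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done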
00:28:56.456 [2024-11-21 03:37:43.764723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.456 [2024-11-21 03:37:43.794072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:57.029 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:57.290 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:57.290 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:57.290 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:57.290 03:37:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:28:57.290 03:37:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:57.290 03:37:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:57.290 03:37:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:57.290 03:37:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:57.551 03:37:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:57.551 { 00:28:57.551 "name": 
"basen1", 00:28:57.551 "aliases": [ 00:28:57.551 "4a2aea00-7e9f-4409-99fb-c27e008b910c" 00:28:57.551 ], 00:28:57.551 "product_name": "NVMe disk", 00:28:57.551 "block_size": 4096, 00:28:57.551 "num_blocks": 1310720, 00:28:57.551 "uuid": "4a2aea00-7e9f-4409-99fb-c27e008b910c", 00:28:57.551 "numa_id": -1, 00:28:57.551 "assigned_rate_limits": { 00:28:57.551 "rw_ios_per_sec": 0, 00:28:57.551 "rw_mbytes_per_sec": 0, 00:28:57.551 "r_mbytes_per_sec": 0, 00:28:57.551 "w_mbytes_per_sec": 0 00:28:57.551 }, 00:28:57.551 "claimed": true, 00:28:57.551 "claim_type": "read_many_write_one", 00:28:57.551 "zoned": false, 00:28:57.551 "supported_io_types": { 00:28:57.551 "read": true, 00:28:57.551 "write": true, 00:28:57.551 "unmap": true, 00:28:57.551 "flush": true, 00:28:57.551 "reset": true, 00:28:57.551 "nvme_admin": true, 00:28:57.551 "nvme_io": true, 00:28:57.551 "nvme_io_md": false, 00:28:57.551 "write_zeroes": true, 00:28:57.551 "zcopy": false, 00:28:57.551 "get_zone_info": false, 00:28:57.551 "zone_management": false, 00:28:57.551 "zone_append": false, 00:28:57.551 "compare": true, 00:28:57.551 "compare_and_write": false, 00:28:57.551 "abort": true, 00:28:57.551 "seek_hole": false, 00:28:57.551 "seek_data": false, 00:28:57.551 "copy": true, 00:28:57.551 "nvme_iov_md": false 00:28:57.551 }, 00:28:57.551 "driver_specific": { 00:28:57.551 "nvme": [ 00:28:57.551 { 00:28:57.551 "pci_address": "0000:00:11.0", 00:28:57.551 "trid": { 00:28:57.551 "trtype": "PCIe", 00:28:57.551 "traddr": "0000:00:11.0" 00:28:57.551 }, 00:28:57.552 "ctrlr_data": { 00:28:57.552 "cntlid": 0, 00:28:57.552 "vendor_id": "0x1b36", 00:28:57.552 "model_number": "QEMU NVMe Ctrl", 00:28:57.552 "serial_number": "12341", 00:28:57.552 "firmware_revision": "8.0.0", 00:28:57.552 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:57.552 "oacs": { 00:28:57.552 "security": 0, 00:28:57.552 "format": 1, 00:28:57.552 "firmware": 0, 00:28:57.552 "ns_manage": 1 00:28:57.552 }, 00:28:57.552 "multi_ctrlr": false, 00:28:57.552 "ana_reporting": false 00:28:57.552 }, 00:28:57.552 "vs": { 00:28:57.552 "nvme_version": "1.4" 00:28:57.552 }, 00:28:57.552 "ns_data": { 00:28:57.552 "id": 1, 00:28:57.552 "can_share": false 00:28:57.552 } 00:28:57.552 } 00:28:57.552 ], 00:28:57.552 "mp_policy": "active_passive" 00:28:57.552 } 00:28:57.552 } 00:28:57.552 ]' 00:28:57.552 03:37:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:57.552 03:37:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:57.552 03:37:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:57.552 03:37:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:57.552 03:37:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:57.552 03:37:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:28:57.552 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:57.552 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:57.552 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:57.552 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:57.552 03:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:57.812 03:37:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=0574822c-62dd-4131-8a4c-28dc7e4eca60 00:28:57.812 03:37:45 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:57.813 03:37:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0574822c-62dd-4131-8a4c-28dc7e4eca60 00:28:58.074 03:37:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:58.335 03:37:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=2f5d8abc-3783-400e-8d00-ca954519fca5 00:28:58.335 03:37:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 2f5d8abc-3783-400e-8d00-ca954519fca5 00:28:58.335 03:37:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=21ac2fdd-a5d9-4549-9670-872515ee256b 00:28:58.335 03:37:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 21ac2fdd-a5d9-4549-9670-872515ee256b ]] 00:28:58.335 03:37:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 21ac2fdd-a5d9-4549-9670-872515ee256b 5120 00:28:58.335 03:37:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:58.335 03:37:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:58.335 03:37:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=21ac2fdd-a5d9-4549-9670-872515ee256b 00:28:58.335 03:37:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:58.335 03:37:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 21ac2fdd-a5d9-4549-9670-872515ee256b 00:28:58.335 03:37:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=21ac2fdd-a5d9-4549-9670-872515ee256b 00:28:58.335 03:37:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:58.335 03:37:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:58.335 03:37:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:58.335 03:37:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 21ac2fdd-a5d9-4549-9670-872515ee256b 00:28:58.597 03:37:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:58.597 { 00:28:58.597 "name": "21ac2fdd-a5d9-4549-9670-872515ee256b", 00:28:58.597 "aliases": [ 00:28:58.597 "lvs/basen1p0" 00:28:58.597 ], 00:28:58.597 "product_name": "Logical Volume", 00:28:58.597 "block_size": 4096, 00:28:58.597 "num_blocks": 5242880, 00:28:58.597 "uuid": "21ac2fdd-a5d9-4549-9670-872515ee256b", 00:28:58.597 "assigned_rate_limits": { 00:28:58.597 "rw_ios_per_sec": 0, 00:28:58.597 "rw_mbytes_per_sec": 0, 00:28:58.597 "r_mbytes_per_sec": 0, 00:28:58.597 "w_mbytes_per_sec": 0 00:28:58.597 }, 00:28:58.597 "claimed": false, 00:28:58.597 "zoned": false, 00:28:58.597 "supported_io_types": { 00:28:58.597 "read": true, 00:28:58.597 "write": true, 00:28:58.597 "unmap": true, 00:28:58.597 "flush": false, 00:28:58.597 "reset": true, 00:28:58.597 "nvme_admin": false, 00:28:58.597 "nvme_io": false, 00:28:58.597 "nvme_io_md": false, 00:28:58.597 "write_zeroes": true, 00:28:58.597 "zcopy": false, 00:28:58.597 "get_zone_info": false, 00:28:58.597 "zone_management": false, 00:28:58.597 "zone_append": false, 00:28:58.597 "compare": false, 00:28:58.597 "compare_and_write": false, 00:28:58.597 "abort": false, 00:28:58.597 "seek_hole": true, 00:28:58.597 "seek_data": true, 
00:28:58.597 "copy": false, 00:28:58.597 "nvme_iov_md": false 00:28:58.597 }, 00:28:58.597 "driver_specific": { 00:28:58.597 "lvol": { 00:28:58.597 "lvol_store_uuid": "2f5d8abc-3783-400e-8d00-ca954519fca5", 00:28:58.597 "base_bdev": "basen1", 00:28:58.597 "thin_provision": true, 00:28:58.597 "num_allocated_clusters": 0, 00:28:58.597 "snapshot": false, 00:28:58.597 "clone": false, 00:28:58.597 "esnap_clone": false 00:28:58.597 } 00:28:58.597 } 00:28:58.597 } 00:28:58.597 ]' 00:28:58.597 03:37:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:58.597 03:37:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:58.597 03:37:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:58.858 03:37:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:28:58.858 03:37:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:28:58.858 03:37:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:28:58.858 03:37:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:58.858 03:37:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:58.858 03:37:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:59.119 03:37:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:59.119 03:37:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:59.119 03:37:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:59.119 03:37:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:59.119 03:37:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:59.119 03:37:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 21ac2fdd-a5d9-4549-9670-872515ee256b -c cachen1p0 --l2p_dram_limit 2 00:28:59.381 [2024-11-21 03:37:46.835803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.381 [2024-11-21 03:37:46.835839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:59.381 [2024-11-21 03:37:46.835851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:59.381 [2024-11-21 03:37:46.835858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.381 [2024-11-21 03:37:46.835909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.381 [2024-11-21 03:37:46.835917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:59.381 [2024-11-21 03:37:46.835929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:59.381 [2024-11-21 03:37:46.835935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.381 [2024-11-21 03:37:46.835951] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:59.381 [2024-11-21 03:37:46.836164] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:59.381 [2024-11-21 03:37:46.836184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.381 [2024-11-21 03:37:46.836191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 
name: Open cache bdev 00:28:59.381 [2024-11-21 03:37:46.836201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.237 ms 00:28:59.381 [2024-11-21 03:37:46.836207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.381 [2024-11-21 03:37:46.836232] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 1278155f-47f9-4744-82dc-0c1ed023fe11 00:28:59.381 [2024-11-21 03:37:46.837202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.381 [2024-11-21 03:37:46.837232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:59.381 [2024-11-21 03:37:46.837240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:28:59.381 [2024-11-21 03:37:46.837248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.381 [2024-11-21 03:37:46.841994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.381 [2024-11-21 03:37:46.842023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:59.381 [2024-11-21 03:37:46.842031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.711 ms 00:28:59.381 [2024-11-21 03:37:46.842042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.381 [2024-11-21 03:37:46.842105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.381 [2024-11-21 03:37:46.842114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:59.381 [2024-11-21 03:37:46.842121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:59.381 [2024-11-21 03:37:46.842128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.381 [2024-11-21 03:37:46.842158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.381 [2024-11-21 03:37:46.842171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:59.381 [2024-11-21 03:37:46.842178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:59.381 [2024-11-21 03:37:46.842185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.381 [2024-11-21 03:37:46.842201] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:59.381 [2024-11-21 03:37:46.843510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.381 [2024-11-21 03:37:46.843539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:59.381 [2024-11-21 03:37:46.843548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.311 ms 00:28:59.381 [2024-11-21 03:37:46.843555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.381 [2024-11-21 03:37:46.843579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.381 [2024-11-21 03:37:46.843589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:59.381 [2024-11-21 03:37:46.843598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:59.381 [2024-11-21 03:37:46.843605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.381 [2024-11-21 03:37:46.843620] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:59.381 [2024-11-21 03:37:46.843724] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:59.381 [2024-11-21 
03:37:46.843736] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:59.381 [2024-11-21 03:37:46.843745] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:59.381 [2024-11-21 03:37:46.843759] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:59.381 [2024-11-21 03:37:46.843766] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:59.381 [2024-11-21 03:37:46.843776] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:59.381 [2024-11-21 03:37:46.843782] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:59.381 [2024-11-21 03:37:46.843789] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:59.381 [2024-11-21 03:37:46.843794] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:59.381 [2024-11-21 03:37:46.843805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.381 [2024-11-21 03:37:46.843811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:59.381 [2024-11-21 03:37:46.843818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:28:59.381 [2024-11-21 03:37:46.843824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.381 [2024-11-21 03:37:46.843893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.381 [2024-11-21 03:37:46.843911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:59.381 [2024-11-21 03:37:46.843920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:59.381 [2024-11-21 03:37:46.843927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.381 [2024-11-21 03:37:46.844006] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:59.381 [2024-11-21 03:37:46.844018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:59.381 [2024-11-21 03:37:46.844026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:59.381 [2024-11-21 03:37:46.844032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:59.381 [2024-11-21 03:37:46.844043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:59.381 [2024-11-21 03:37:46.844048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:59.381 [2024-11-21 03:37:46.844054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:59.381 [2024-11-21 03:37:46.844060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:59.381 [2024-11-21 03:37:46.844067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:59.381 [2024-11-21 03:37:46.844073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:59.381 [2024-11-21 03:37:46.844079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:59.381 [2024-11-21 03:37:46.844086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:59.381 [2024-11-21 03:37:46.844094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:59.381 [2024-11-21 03:37:46.844099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:59.381 [2024-11-21 03:37:46.844106] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:59.381 [2024-11-21 03:37:46.844111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:59.381 [2024-11-21 03:37:46.844118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:59.381 [2024-11-21 03:37:46.844124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:59.381 [2024-11-21 03:37:46.844130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:59.381 [2024-11-21 03:37:46.844135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:59.381 [2024-11-21 03:37:46.844141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:59.381 [2024-11-21 03:37:46.844146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:59.381 [2024-11-21 03:37:46.844152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:59.381 [2024-11-21 03:37:46.844157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:59.381 [2024-11-21 03:37:46.844163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:59.381 [2024-11-21 03:37:46.844168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:59.381 [2024-11-21 03:37:46.844174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:59.381 [2024-11-21 03:37:46.844179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:59.381 [2024-11-21 03:37:46.844192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:59.381 [2024-11-21 03:37:46.844196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:59.381 [2024-11-21 03:37:46.844203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:59.381 [2024-11-21 03:37:46.844208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:59.381 [2024-11-21 03:37:46.844214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:59.381 [2024-11-21 03:37:46.844219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:59.381 [2024-11-21 03:37:46.844225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:59.381 [2024-11-21 03:37:46.844230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:59.381 [2024-11-21 03:37:46.844236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:59.381 [2024-11-21 03:37:46.844241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:59.381 [2024-11-21 03:37:46.844247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:59.382 [2024-11-21 03:37:46.844252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:59.382 [2024-11-21 03:37:46.844258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:59.382 [2024-11-21 03:37:46.844263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:59.382 [2024-11-21 03:37:46.844270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:59.382 [2024-11-21 03:37:46.844275] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:59.382 [2024-11-21 03:37:46.844284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:59.382 [2024-11-21 03:37:46.844289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:59.382 [2024-11-21 03:37:46.844296] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:59.382 [2024-11-21 03:37:46.844303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:59.382 [2024-11-21 03:37:46.844310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:59.382 [2024-11-21 03:37:46.844315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:59.382 [2024-11-21 03:37:46.844321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:59.382 [2024-11-21 03:37:46.844326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:59.382 [2024-11-21 03:37:46.844332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:59.382 [2024-11-21 03:37:46.844339] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:59.382 [2024-11-21 03:37:46.844348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:59.382 [2024-11-21 03:37:46.844355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:59.382 [2024-11-21 03:37:46.844362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:59.382 [2024-11-21 03:37:46.844367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:59.382 [2024-11-21 03:37:46.844373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:59.382 [2024-11-21 03:37:46.844379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:59.382 [2024-11-21 03:37:46.844386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:59.382 [2024-11-21 03:37:46.844391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:59.382 [2024-11-21 03:37:46.844397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:59.382 [2024-11-21 03:37:46.844402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:59.382 [2024-11-21 03:37:46.844408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:59.382 [2024-11-21 03:37:46.844414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:59.382 [2024-11-21 03:37:46.844421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:59.382 [2024-11-21 03:37:46.844426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:59.382 [2024-11-21 03:37:46.844432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:59.382 [2024-11-21 03:37:46.844438] upgrade/ftl_sb_v5.c: 
422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:59.382 [2024-11-21 03:37:46.844445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:59.382 [2024-11-21 03:37:46.844451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:59.382 [2024-11-21 03:37:46.844457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:59.382 [2024-11-21 03:37:46.844463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:59.382 [2024-11-21 03:37:46.844469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:59.382 [2024-11-21 03:37:46.844476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.382 [2024-11-21 03:37:46.844484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:59.382 [2024-11-21 03:37:46.844490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.522 ms 00:28:59.382 [2024-11-21 03:37:46.844496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.382 [2024-11-21 03:37:46.844525] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:59.382 [2024-11-21 03:37:46.844540] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:02.685 [2024-11-21 03:37:50.148504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.685 [2024-11-21 03:37:50.148603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:02.685 [2024-11-21 03:37:50.148620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3303.961 ms 00:29:02.685 [2024-11-21 03:37:50.148632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.685 [2024-11-21 03:37:50.162783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.685 [2024-11-21 03:37:50.162849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:02.685 [2024-11-21 03:37:50.162864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.023 ms 00:29:02.685 [2024-11-21 03:37:50.162883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.685 [2024-11-21 03:37:50.162960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.685 [2024-11-21 03:37:50.162975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:02.685 [2024-11-21 03:37:50.162994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:02.685 [2024-11-21 03:37:50.163005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.685 [2024-11-21 03:37:50.175506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.685 [2024-11-21 03:37:50.175566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:02.685 [2024-11-21 03:37:50.175581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.448 ms 00:29:02.685 [2024-11-21 03:37:50.175599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.685 [2024-11-21 03:37:50.175631] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.685 [2024-11-21 03:37:50.175643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:02.685 [2024-11-21 03:37:50.175652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:02.685 [2024-11-21 03:37:50.175662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.685 [2024-11-21 03:37:50.176198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.685 [2024-11-21 03:37:50.176240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:02.685 [2024-11-21 03:37:50.176258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.483 ms 00:29:02.685 [2024-11-21 03:37:50.176273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.685 [2024-11-21 03:37:50.176325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.685 [2024-11-21 03:37:50.176342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:02.685 [2024-11-21 03:37:50.176351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:29:02.685 [2024-11-21 03:37:50.176362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.685 [2024-11-21 03:37:50.184736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.685 [2024-11-21 03:37:50.184788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:02.685 [2024-11-21 03:37:50.184799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.351 ms 00:29:02.685 [2024-11-21 03:37:50.184810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.685 [2024-11-21 03:37:50.194577] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:02.685 [2024-11-21 03:37:50.195847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.685 [2024-11-21 03:37:50.195892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:02.685 [2024-11-21 03:37:50.195933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.899 ms 00:29:02.685 [2024-11-21 03:37:50.195943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.685 [2024-11-21 03:37:50.227373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.685 [2024-11-21 03:37:50.227443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:02.685 [2024-11-21 03:37:50.227469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.390 ms 00:29:02.685 [2024-11-21 03:37:50.227479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.685 [2024-11-21 03:37:50.227595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.686 [2024-11-21 03:37:50.227607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:02.686 [2024-11-21 03:37:50.227621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:29:02.686 [2024-11-21 03:37:50.227630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.686 [2024-11-21 03:37:50.233547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.686 [2024-11-21 03:37:50.233601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:02.686 [2024-11-21 03:37:50.233620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 5.882 ms 00:29:02.686 [2024-11-21 03:37:50.233629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.686 [2024-11-21 03:37:50.239463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.686 [2024-11-21 03:37:50.239519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:02.686 [2024-11-21 03:37:50.239533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.773 ms 00:29:02.686 [2024-11-21 03:37:50.239541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.686 [2024-11-21 03:37:50.239872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.686 [2024-11-21 03:37:50.239886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:02.686 [2024-11-21 03:37:50.239936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.277 ms 00:29:02.686 [2024-11-21 03:37:50.239945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.947 [2024-11-21 03:37:50.290753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.947 [2024-11-21 03:37:50.290812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:02.947 [2024-11-21 03:37:50.290831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.758 ms 00:29:02.947 [2024-11-21 03:37:50.290840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.947 [2024-11-21 03:37:50.298611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.947 [2024-11-21 03:37:50.298668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:02.947 [2024-11-21 03:37:50.298683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.656 ms 00:29:02.947 [2024-11-21 03:37:50.298692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.947 [2024-11-21 03:37:50.305477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.947 [2024-11-21 03:37:50.305525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:02.947 [2024-11-21 03:37:50.305538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.726 ms 00:29:02.947 [2024-11-21 03:37:50.305546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.947 [2024-11-21 03:37:50.312170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.947 [2024-11-21 03:37:50.312223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:02.947 [2024-11-21 03:37:50.312240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.562 ms 00:29:02.947 [2024-11-21 03:37:50.312249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.947 [2024-11-21 03:37:50.312305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.947 [2024-11-21 03:37:50.312315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:02.947 [2024-11-21 03:37:50.312327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:02.947 [2024-11-21 03:37:50.312335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.947 [2024-11-21 03:37:50.312455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.947 [2024-11-21 03:37:50.312469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:02.947 [2024-11-21 03:37:50.312483] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:29:02.947 [2024-11-21 03:37:50.312492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.947 [2024-11-21 03:37:50.313596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3477.326 ms, result 0 00:29:02.947 { 00:29:02.947 "name": "ftl", 00:29:02.947 "uuid": "1278155f-47f9-4744-82dc-0c1ed023fe11" 00:29:02.947 } 00:29:02.947 03:37:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:03.209 [2024-11-21 03:37:50.528805] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:03.209 03:37:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:03.209 03:37:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:03.470 [2024-11-21 03:37:50.957277] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:03.470 03:37:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:03.732 [2024-11-21 03:37:51.161709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:03.732 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:03.994 Fill FTL, iteration 1 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=95354 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@164 -- # export spdk_ini_pid 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 95354 /var/tmp/spdk.tgt.sock 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95354 ']' 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:03.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:03.994 03:37:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:04.254 [2024-11-21 03:37:51.620483] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:29:04.254 [2024-11-21 03:37:51.620637] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95354 ] 00:29:04.254 [2024-11-21 03:37:51.757665] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:04.254 [2024-11-21 03:37:51.788658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.515 [2024-11-21 03:37:51.817492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.086 03:37:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:05.086 03:37:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:05.086 03:37:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:05.347 ftln1 00:29:05.347 03:37:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:05.347 03:37:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:05.609 03:37:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:05.609 03:37:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 95354 00:29:05.609 03:37:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 95354 ']' 00:29:05.609 03:37:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 95354 00:29:05.609 03:37:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:05.609 03:37:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:05.609 03:37:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95354 00:29:05.609 03:37:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:05.609 killing process with pid 95354 00:29:05.609 
03:37:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:05.609 03:37:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95354' 00:29:05.609 03:37:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 95354 00:29:05.609 03:37:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 95354 00:29:05.870 03:37:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:05.870 03:37:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:05.870 [2024-11-21 03:37:53.358689] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:29:05.870 [2024-11-21 03:37:53.358833] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95384 ] 00:29:06.131 [2024-11-21 03:37:53.494485] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:06.131 [2024-11-21 03:37:53.524190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.131 [2024-11-21 03:37:53.552730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.519  [2024-11-21T03:37:56.029Z] Copying: 174/1024 [MB] (174 MBps) [2024-11-21T03:37:56.973Z] Copying: 385/1024 [MB] (211 MBps) [2024-11-21T03:37:57.917Z] Copying: 632/1024 [MB] (247 MBps) [2024-11-21T03:37:58.490Z] Copying: 881/1024 [MB] (249 MBps) [2024-11-21T03:37:58.490Z] Copying: 1024/1024 [MB] (average 223 MBps) 00:29:10.925 00:29:11.187 03:37:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:11.187 Calculate MD5 checksum, iteration 1 00:29:11.187 03:37:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:11.187 03:37:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:11.187 03:37:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:11.187 03:37:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:11.187 03:37:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:11.187 03:37:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:11.187 03:37:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:11.187 [2024-11-21 03:37:58.552245] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
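[Annotation] Stripped of its wrapper, the fill pass that just completed (Copying: 1024/1024 [MB], average 223 MBps) is a single spdk_dd invocation; every flag below appears verbatim in the trace, only the standalone form is a sketch:

    # Write 1024 x 1 MiB blocks of random data into the NVMe/TCP-attached FTL
    # bdev ftln1, two I/Os in flight (--qd=2), starting at block offset 0.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0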
00:29:11.187 [2024-11-21 03:37:58.552367] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95437 ] 00:29:11.187 [2024-11-21 03:37:58.684723] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:11.187 [2024-11-21 03:37:58.711681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.187 [2024-11-21 03:37:58.728762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.574  [2024-11-21T03:38:00.710Z] Copying: 659/1024 [MB] (659 MBps) [2024-11-21T03:38:00.710Z] Copying: 1024/1024 [MB] (average 638 MBps) 00:29:13.145 00:29:13.145 03:38:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:13.145 03:38:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:15.679 03:38:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:15.679 Fill FTL, iteration 2 00:29:15.679 03:38:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=fa3153bec977f8518770fce2e07003e9 00:29:15.679 03:38:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:15.679 03:38:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:15.680 03:38:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:15.680 03:38:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:15.680 03:38:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:15.680 03:38:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:15.680 03:38:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:15.680 03:38:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:15.680 03:38:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:15.680 [2024-11-21 03:38:02.794847] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:29:15.680 [2024-11-21 03:38:02.795158] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95493 ] 00:29:15.680 [2024-11-21 03:38:02.927363] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
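[Annotation] The seek/skip arithmetic and checksum bookkeeping driving these passes (upgrade_shutdown.sh@38-48 in the trace) condense to a loop of roughly this shape; variable names follow the xtrace output, and the digests stored in sums[] are presumably rechecked after the shutdown, outside this excerpt:

    # Each iteration fills a fresh 1 GiB window of ftln1 with urandom, reads
    # the same window back through the bdev, and records the readback's MD5.
    testdir=/home/vagrant/spdk_repo/spdk/test/ftl
    iterations=2 count=1024 seek=0 skip=0
    sums=()
    for (( i = 0; i < iterations; i++ )); do
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=$count --qd=2 --seek=$seek
        (( seek += count ))
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=$count --qd=2 --skip=$skip
        (( skip += count ))
        sums[i]=$(md5sum "$testdir/file" | cut -f1 -d' ')
    done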
00:29:15.680 [2024-11-21 03:38:02.956957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.680 [2024-11-21 03:38:02.975772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.614  [2024-11-21T03:38:05.562Z] Copying: 198/1024 [MB] (198 MBps) [2024-11-21T03:38:06.506Z] Copying: 417/1024 [MB] (219 MBps) [2024-11-21T03:38:07.448Z] Copying: 668/1024 [MB] (251 MBps) [2024-11-21T03:38:07.708Z] Copying: 921/1024 [MB] (253 MBps) [2024-11-21T03:38:07.708Z] Copying: 1024/1024 [MB] (average 232 MBps) 00:29:20.143 00:29:20.405 Calculate MD5 checksum, iteration 2 00:29:20.405 03:38:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:20.405 03:38:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:20.405 03:38:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:20.405 03:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:20.405 03:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:20.405 03:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:20.405 03:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:20.405 03:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:20.405 [2024-11-21 03:38:07.771880] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:29:20.405 [2024-11-21 03:38:07.772006] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95540 ] 00:29:20.405 [2024-11-21 03:38:07.903017] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
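
Iteration 2 repeats the same pair one gigabyte further in: the trace shows seek and skip both advancing from 0 to 1024 and later to 2048 (upgrade_shutdown.sh@41/@45), with the digest of each readback captured via md5sum piped into cut (@47-48). The driving loop reduces to something like this sketch (tcp_dd is the harness wrapper seen in the trace; testfile stands in for test/ftl/file):

```bash
# Reconstructed shape of the fill/verify loop (upgrade_shutdown.sh@38-48).
seek=0 skip=0 i=0 iterations=2
while (( i < iterations )); do
    echo "Fill FTL, iteration $((i + 1))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek="$seek"
    (( seek += 1024 ))                                # @41: 1024, then 2048
    echo "Calculate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
    (( skip += 1024 ))                                # @45: 1024, then 2048
    sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')     # @47-48
    (( i++ ))                                         # @38
done
```

The two digests recorded above (fa3153bec977f8518770fce2e07003e9 and, below, a899523399f3063ae8b7a8b8a80d3cc3) are what the test can compare against after the shutdown/upgrade cycle to prove the data survived.
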
00:29:20.405 [2024-11-21 03:38:07.928366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.405 [2024-11-21 03:38:07.945973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.793  [2024-11-21T03:38:09.930Z] Copying: 681/1024 [MB] (681 MBps) [2024-11-21T03:38:13.285Z] Copying: 1024/1024 [MB] (average 672 MBps) 00:29:25.720 00:29:25.720 03:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:25.720 03:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:27.627 03:38:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:27.627 03:38:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a899523399f3063ae8b7a8b8a80d3cc3 00:29:27.627 03:38:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:27.627 03:38:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:27.627 03:38:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:27.886 [2024-11-21 03:38:15.300077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.886 [2024-11-21 03:38:15.300127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:27.886 [2024-11-21 03:38:15.300140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:27.886 [2024-11-21 03:38:15.300151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.886 [2024-11-21 03:38:15.300169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.886 [2024-11-21 03:38:15.300176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:27.886 [2024-11-21 03:38:15.300184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:27.886 [2024-11-21 03:38:15.300196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.886 [2024-11-21 03:38:15.300212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.886 [2024-11-21 03:38:15.300218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:27.886 [2024-11-21 03:38:15.300227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:27.886 [2024-11-21 03:38:15.300234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.886 [2024-11-21 03:38:15.300292] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.202 ms, result 0 00:29:27.886 true 00:29:27.886 03:38:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:28.144 { 00:29:28.144 "name": "ftl", 00:29:28.144 "properties": [ 00:29:28.144 { 00:29:28.144 "name": "superblock_version", 00:29:28.144 "value": 5, 00:29:28.144 "read-only": true 00:29:28.144 }, 00:29:28.144 { 00:29:28.144 "name": "base_device", 00:29:28.144 "bands": [ 00:29:28.144 { 00:29:28.144 "id": 0, 00:29:28.144 "state": "FREE", 00:29:28.144 "validity": 0.0 00:29:28.144 }, 00:29:28.144 { 00:29:28.144 "id": 1, 00:29:28.144 "state": "FREE", 00:29:28.144 "validity": 0.0 00:29:28.144 }, 00:29:28.144 { 00:29:28.144 "id": 2, 00:29:28.144 "state": "FREE", 00:29:28.144 "validity": 0.0 00:29:28.144 }, 00:29:28.144 { 00:29:28.144 "id": 3, 00:29:28.144 "state": "FREE", 
00:29:28.144 "validity": 0.0 00:29:28.144 }, 00:29:28.144 { 00:29:28.144 "id": 4, 00:29:28.144 "state": "FREE", 00:29:28.144 "validity": 0.0 00:29:28.144 }, 00:29:28.144 { 00:29:28.144 "id": 5, 00:29:28.144 "state": "FREE", 00:29:28.145 "validity": 0.0 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "id": 6, 00:29:28.145 "state": "FREE", 00:29:28.145 "validity": 0.0 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "id": 7, 00:29:28.145 "state": "FREE", 00:29:28.145 "validity": 0.0 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "id": 8, 00:29:28.145 "state": "FREE", 00:29:28.145 "validity": 0.0 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "id": 9, 00:29:28.145 "state": "FREE", 00:29:28.145 "validity": 0.0 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "id": 10, 00:29:28.145 "state": "FREE", 00:29:28.145 "validity": 0.0 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "id": 11, 00:29:28.145 "state": "FREE", 00:29:28.145 "validity": 0.0 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "id": 12, 00:29:28.145 "state": "FREE", 00:29:28.145 "validity": 0.0 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "id": 13, 00:29:28.145 "state": "FREE", 00:29:28.145 "validity": 0.0 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "id": 14, 00:29:28.145 "state": "FREE", 00:29:28.145 "validity": 0.0 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "id": 15, 00:29:28.145 "state": "FREE", 00:29:28.145 "validity": 0.0 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "id": 16, 00:29:28.145 "state": "FREE", 00:29:28.145 "validity": 0.0 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "id": 17, 00:29:28.145 "state": "FREE", 00:29:28.145 "validity": 0.0 00:29:28.145 } 00:29:28.145 ], 00:29:28.145 "read-only": true 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "name": "cache_device", 00:29:28.145 "type": "bdev", 00:29:28.145 "chunks": [ 00:29:28.145 { 00:29:28.145 "id": 0, 00:29:28.145 "state": "INACTIVE", 00:29:28.145 "utilization": 0.0 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "id": 1, 00:29:28.145 "state": "CLOSED", 00:29:28.145 "utilization": 1.0 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "id": 2, 00:29:28.145 "state": "CLOSED", 00:29:28.145 "utilization": 1.0 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "id": 3, 00:29:28.145 "state": "OPEN", 00:29:28.145 "utilization": 0.001953125 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "id": 4, 00:29:28.145 "state": "OPEN", 00:29:28.145 "utilization": 0.0 00:29:28.145 } 00:29:28.145 ], 00:29:28.145 "read-only": true 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "name": "verbose_mode", 00:29:28.145 "value": true, 00:29:28.145 "unit": "", 00:29:28.145 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:28.145 }, 00:29:28.145 { 00:29:28.145 "name": "prep_upgrade_on_shutdown", 00:29:28.145 "value": false, 00:29:28.145 "unit": "", 00:29:28.145 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:28.145 } 00:29:28.145 ] 00:29:28.145 } 00:29:28.145 03:38:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:28.403 [2024-11-21 03:38:15.708358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.403 [2024-11-21 03:38:15.708393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:28.403 [2024-11-21 03:38:15.708402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:28.404 [2024-11-21 03:38:15.708408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:28.404 [2024-11-21 03:38:15.708424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.404 [2024-11-21 03:38:15.708431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:28.404 [2024-11-21 03:38:15.708438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:28.404 [2024-11-21 03:38:15.708443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.404 [2024-11-21 03:38:15.708458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.404 [2024-11-21 03:38:15.708464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:28.404 [2024-11-21 03:38:15.708470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:28.404 [2024-11-21 03:38:15.708475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.404 [2024-11-21 03:38:15.708518] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.155 ms, result 0 00:29:28.404 true 00:29:28.404 03:38:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:28.404 03:38:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:28.404 03:38:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:28.404 03:38:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:28.404 03:38:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:28.404 03:38:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:28.662 [2024-11-21 03:38:16.128712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.662 [2024-11-21 03:38:16.128741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:28.662 [2024-11-21 03:38:16.128750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:28.662 [2024-11-21 03:38:16.128755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.662 [2024-11-21 03:38:16.128771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.662 [2024-11-21 03:38:16.128777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:28.662 [2024-11-21 03:38:16.128783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:28.662 [2024-11-21 03:38:16.128789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.662 [2024-11-21 03:38:16.128803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.662 [2024-11-21 03:38:16.128809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:28.662 [2024-11-21 03:38:16.128815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:28.662 [2024-11-21 03:38:16.128821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.662 [2024-11-21 03:38:16.128860] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.137 ms, result 0 00:29:28.662 true 00:29:28.662 03:38:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:28.921 { 00:29:28.921 "name": "ftl", 00:29:28.921 "properties": [ 00:29:28.921 { 00:29:28.921 "name": "superblock_version", 00:29:28.921 "value": 5, 00:29:28.921 "read-only": true 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "name": "base_device", 00:29:28.921 "bands": [ 00:29:28.921 { 00:29:28.921 "id": 0, 00:29:28.921 "state": "FREE", 00:29:28.921 "validity": 0.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 1, 00:29:28.921 "state": "FREE", 00:29:28.921 "validity": 0.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 2, 00:29:28.921 "state": "FREE", 00:29:28.921 "validity": 0.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 3, 00:29:28.921 "state": "FREE", 00:29:28.921 "validity": 0.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 4, 00:29:28.921 "state": "FREE", 00:29:28.921 "validity": 0.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 5, 00:29:28.921 "state": "FREE", 00:29:28.921 "validity": 0.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 6, 00:29:28.921 "state": "FREE", 00:29:28.921 "validity": 0.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 7, 00:29:28.921 "state": "FREE", 00:29:28.921 "validity": 0.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 8, 00:29:28.921 "state": "FREE", 00:29:28.921 "validity": 0.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 9, 00:29:28.921 "state": "FREE", 00:29:28.921 "validity": 0.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 10, 00:29:28.921 "state": "FREE", 00:29:28.921 "validity": 0.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 11, 00:29:28.921 "state": "FREE", 00:29:28.921 "validity": 0.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 12, 00:29:28.921 "state": "FREE", 00:29:28.921 "validity": 0.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 13, 00:29:28.921 "state": "FREE", 00:29:28.921 "validity": 0.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 14, 00:29:28.921 "state": "FREE", 00:29:28.921 "validity": 0.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 15, 00:29:28.921 "state": "FREE", 00:29:28.921 "validity": 0.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 16, 00:29:28.921 "state": "FREE", 00:29:28.921 "validity": 0.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 17, 00:29:28.921 "state": "FREE", 00:29:28.921 "validity": 0.0 00:29:28.921 } 00:29:28.921 ], 00:29:28.921 "read-only": true 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "name": "cache_device", 00:29:28.921 "type": "bdev", 00:29:28.921 "chunks": [ 00:29:28.921 { 00:29:28.921 "id": 0, 00:29:28.921 "state": "INACTIVE", 00:29:28.921 "utilization": 0.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 1, 00:29:28.921 "state": "CLOSED", 00:29:28.921 "utilization": 1.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 2, 00:29:28.921 "state": "CLOSED", 00:29:28.921 "utilization": 1.0 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 3, 00:29:28.921 "state": "OPEN", 00:29:28.921 "utilization": 0.001953125 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "id": 4, 00:29:28.921 "state": "OPEN", 00:29:28.921 "utilization": 0.0 00:29:28.921 } 00:29:28.921 ], 00:29:28.921 "read-only": true 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "name": "verbose_mode", 00:29:28.921 "value": true, 00:29:28.921 "unit": "", 00:29:28.921 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:28.921 }, 00:29:28.921 { 00:29:28.921 "name": "prep_upgrade_on_shutdown", 00:29:28.921 "value": true, 00:29:28.921 "unit": "", 00:29:28.921 
"desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:28.921 } 00:29:28.921 ] 00:29:28.921 } 00:29:28.921 03:38:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:28.921 03:38:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 95236 ]] 00:29:28.921 03:38:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 95236 00:29:28.921 03:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 95236 ']' 00:29:28.921 03:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 95236 00:29:28.921 03:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:28.921 03:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:28.921 03:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95236 00:29:28.921 killing process with pid 95236 00:29:28.921 03:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:28.921 03:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:28.921 03:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95236' 00:29:28.921 03:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 95236 00:29:28.921 03:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 95236 00:29:29.181 [2024-11-21 03:38:16.496891] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:29.181 [2024-11-21 03:38:16.502240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.181 [2024-11-21 03:38:16.502281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:29.181 [2024-11-21 03:38:16.502292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:29.181 [2024-11-21 03:38:16.502299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.181 [2024-11-21 03:38:16.502317] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:29.181 [2024-11-21 03:38:16.502830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.181 [2024-11-21 03:38:16.502853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:29.181 [2024-11-21 03:38:16.502861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.502 ms 00:29:29.181 [2024-11-21 03:38:16.502868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.307 [2024-11-21 03:38:24.372071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.307 [2024-11-21 03:38:24.372139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:37.307 [2024-11-21 03:38:24.372153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7869.145 ms 00:29:37.307 [2024-11-21 03:38:24.372161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.307 [2024-11-21 03:38:24.373617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.307 [2024-11-21 03:38:24.373644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:37.307 [2024-11-21 03:38:24.373653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.442 ms 00:29:37.307 [2024-11-21 03:38:24.373661] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.307 [2024-11-21 03:38:24.377714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.307 [2024-11-21 03:38:24.378176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:37.307 [2024-11-21 03:38:24.378230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.726 ms 00:29:37.307 [2024-11-21 03:38:24.378275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.307 [2024-11-21 03:38:24.381940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.307 [2024-11-21 03:38:24.382019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:37.307 [2024-11-21 03:38:24.382052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.467 ms 00:29:37.307 [2024-11-21 03:38:24.382078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.307 [2024-11-21 03:38:24.386204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.307 [2024-11-21 03:38:24.386334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:37.307 [2024-11-21 03:38:24.386364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.039 ms 00:29:37.307 [2024-11-21 03:38:24.386400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.307 [2024-11-21 03:38:24.386602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.307 [2024-11-21 03:38:24.386621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:37.307 [2024-11-21 03:38:24.386630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.116 ms 00:29:37.307 [2024-11-21 03:38:24.386638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.307 [2024-11-21 03:38:24.389001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.307 [2024-11-21 03:38:24.389145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:37.307 [2024-11-21 03:38:24.389161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.336 ms 00:29:37.307 [2024-11-21 03:38:24.389179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.307 [2024-11-21 03:38:24.391691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.307 [2024-11-21 03:38:24.391735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:37.307 [2024-11-21 03:38:24.391747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.233 ms 00:29:37.307 [2024-11-21 03:38:24.391755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.307 [2024-11-21 03:38:24.393848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.307 [2024-11-21 03:38:24.393888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:37.307 [2024-11-21 03:38:24.393916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.054 ms 00:29:37.307 [2024-11-21 03:38:24.393924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.307 [2024-11-21 03:38:24.395664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.307 [2024-11-21 03:38:24.395701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:37.307 [2024-11-21 03:38:24.395710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.674 ms 
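
For context on the shutdown sequence above: before killing the target, the harness flipped prep_upgrade_on_shutdown to true (upgrade_shutdown.sh@56) and confirmed the write-buffer cache actually held data by counting non-empty chunks with the jq filter at @63. The property dump shows two CLOSED chunks at utilization 1.0 plus one OPEN chunk at 0.001953125, hence used=3. That property is what turns the kill into the ordered persist sequence logged here (L2P, NV cache metadata, valid map, P2L, band info, trim, superblock, then the clean-state flag). A standalone equivalent of the property and gating step, with the jq filter taken verbatim from the trace:

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# @56: ask FTL to run the upgrade-preparation steps on shutdown.
$RPC bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
# @63: count cache chunks that hold any data.
used=$($RPC bdev_ftl_get_properties -b ftl \
       | jq '[.properties[] | select(.name == "cache_device")
                | .chunks[] | select(.utilization != 0.0)] | length')
[[ $used -eq 0 ]] && echo "cache unexpectedly empty" >&2   # @64 guard
```
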
00:29:37.307 [2024-11-21 03:38:24.395718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.307 [2024-11-21 03:38:24.395751] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:37.307 [2024-11-21 03:38:24.395766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:37.307 [2024-11-21 03:38:24.395777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:37.307 [2024-11-21 03:38:24.395785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:37.307 [2024-11-21 03:38:24.395793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:37.307 [2024-11-21 03:38:24.395801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:37.307 [2024-11-21 03:38:24.395809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:37.307 [2024-11-21 03:38:24.395817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:37.307 [2024-11-21 03:38:24.395825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:37.307 [2024-11-21 03:38:24.395832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:37.307 [2024-11-21 03:38:24.395840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:37.307 [2024-11-21 03:38:24.395848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:37.307 [2024-11-21 03:38:24.395856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:37.307 [2024-11-21 03:38:24.395863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:37.307 [2024-11-21 03:38:24.395872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:37.307 [2024-11-21 03:38:24.395879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:37.307 [2024-11-21 03:38:24.395887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:37.307 [2024-11-21 03:38:24.395909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:37.307 [2024-11-21 03:38:24.395917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:37.308 [2024-11-21 03:38:24.395927] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:37.308 [2024-11-21 03:38:24.395935] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 1278155f-47f9-4744-82dc-0c1ed023fe11 00:29:37.308 [2024-11-21 03:38:24.395944] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:37.308 [2024-11-21 03:38:24.395957] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:29:37.308 [2024-11-21 03:38:24.395966] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:37.308 [2024-11-21 03:38:24.395974] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:37.308 [2024-11-21 
03:38:24.395981] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:37.308 [2024-11-21 03:38:24.395990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:37.308 [2024-11-21 03:38:24.395998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:37.308 [2024-11-21 03:38:24.396007] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:37.308 [2024-11-21 03:38:24.396014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:37.308 [2024-11-21 03:38:24.396025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.308 [2024-11-21 03:38:24.396034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:37.308 [2024-11-21 03:38:24.396043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.275 ms 00:29:37.308 [2024-11-21 03:38:24.396052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.308 [2024-11-21 03:38:24.398444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.308 [2024-11-21 03:38:24.398572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:37.308 [2024-11-21 03:38:24.398632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.374 ms 00:29:37.308 [2024-11-21 03:38:24.398657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.308 [2024-11-21 03:38:24.398779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.308 [2024-11-21 03:38:24.398805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:37.308 [2024-11-21 03:38:24.398924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.087 ms 00:29:37.308 [2024-11-21 03:38:24.398949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.308 [2024-11-21 03:38:24.406756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:37.308 [2024-11-21 03:38:24.406890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:37.308 [2024-11-21 03:38:24.406997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:37.308 [2024-11-21 03:38:24.407022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.308 [2024-11-21 03:38:24.407065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:37.308 [2024-11-21 03:38:24.407087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:37.308 [2024-11-21 03:38:24.407107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:37.308 [2024-11-21 03:38:24.407126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.308 [2024-11-21 03:38:24.407278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:37.308 [2024-11-21 03:38:24.407303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:37.308 [2024-11-21 03:38:24.407331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:37.308 [2024-11-21 03:38:24.407352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.308 [2024-11-21 03:38:24.407381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:37.308 [2024-11-21 03:38:24.407449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:37.308 [2024-11-21 03:38:24.407473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
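
The statistics block above is internally consistent: WAF is total writes divided by user writes, 786752 / 524288 ≈ 1.5006, so FTL issued roughly half again as many media writes (metadata persists, relocation) as the user I/O itself. The 524288 user writes also line up with the workload, since the two 1024 MiB fill passes come to 2 GiB, which is 524288 blocks at a 4 KiB block size. A quick check of the printed figure:

```bash
# Reproduce the WAF figure from the ftl_dev_dump_stats output above.
awk 'BEGIN { printf "WAF: %.4f\n", 786752 / 524288 }'   # -> WAF: 1.5006
```
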
00:29:37.308 [2024-11-21 03:38:24.407493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.308 [2024-11-21 03:38:24.421845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:37.308 [2024-11-21 03:38:24.422020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:37.308 [2024-11-21 03:38:24.422080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:37.308 [2024-11-21 03:38:24.422106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.308 [2024-11-21 03:38:24.433269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:37.308 [2024-11-21 03:38:24.433420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:37.308 [2024-11-21 03:38:24.433473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:37.308 [2024-11-21 03:38:24.433495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.308 [2024-11-21 03:38:24.433586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:37.308 [2024-11-21 03:38:24.433620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:37.308 [2024-11-21 03:38:24.433641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:37.308 [2024-11-21 03:38:24.433660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.308 [2024-11-21 03:38:24.433714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:37.308 [2024-11-21 03:38:24.433809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:37.308 [2024-11-21 03:38:24.433819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:37.308 [2024-11-21 03:38:24.433828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.308 [2024-11-21 03:38:24.433921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:37.308 [2024-11-21 03:38:24.433933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:37.308 [2024-11-21 03:38:24.433947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:37.308 [2024-11-21 03:38:24.433956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.308 [2024-11-21 03:38:24.433995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:37.308 [2024-11-21 03:38:24.434006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:37.308 [2024-11-21 03:38:24.434014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:37.308 [2024-11-21 03:38:24.434021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.308 [2024-11-21 03:38:24.434065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:37.308 [2024-11-21 03:38:24.434078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:37.308 [2024-11-21 03:38:24.434090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:37.308 [2024-11-21 03:38:24.434098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.308 [2024-11-21 03:38:24.434148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:37.308 [2024-11-21 03:38:24.434159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:37.308 [2024-11-21 03:38:24.434167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl] duration: 0.000 ms 00:29:37.308 [2024-11-21 03:38:24.434175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.308 [2024-11-21 03:38:24.434338] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7931.996 ms, result 0 00:29:41.511 03:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:41.511 03:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:41.511 03:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:41.511 03:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:41.511 03:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:41.511 03:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95744 00:29:41.511 03:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:41.511 03:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95744 00:29:41.511 03:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95744 ']' 00:29:41.511 03:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.512 03:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.512 03:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.512 03:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.512 03:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:41.512 03:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:41.512 [2024-11-21 03:38:28.295201] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:29:41.512 [2024-11-21 03:38:28.296078] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95744 ] 00:29:41.512 [2024-11-21 03:38:28.433061] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
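
The restart above is ftl/common.sh tcp_target_setup: it relaunches spdk_tgt pinned to core 0 with the tgt.json saved before shutdown, records the new pid (95744), and blocks until the RPC socket answers, which is what the "Waiting for process to start up..." line reflects. A minimal sketch of that bring-up (the polling loop is illustrative rather than the harness's exact waitforlisten; rpc_get_methods is a standard SPDK RPC used here only as a liveness probe):

```bash
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" '--cpumask=[0]' \
    --config="$SPDK/test/ftl/config/tgt.json" &
spdk_tgt_pid=$!

# waitforlisten-style: poll the UNIX-domain RPC socket until the target answers.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt died" >&2; exit 1; }
    sleep 0.5
done
```
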
00:29:41.512 [2024-11-21 03:38:28.459274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.512 [2024-11-21 03:38:28.499679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.512 [2024-11-21 03:38:28.917341] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:41.512 [2024-11-21 03:38:28.921126] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:41.512 [2024-11-21 03:38:29.070048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.512 [2024-11-21 03:38:29.070290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:41.512 [2024-11-21 03:38:29.070323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:41.512 [2024-11-21 03:38:29.070334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.512 [2024-11-21 03:38:29.070413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.512 [2024-11-21 03:38:29.070426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:41.512 [2024-11-21 03:38:29.070441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:29:41.512 [2024-11-21 03:38:29.070451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.512 [2024-11-21 03:38:29.070482] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:41.512 [2024-11-21 03:38:29.070770] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:41.512 [2024-11-21 03:38:29.070790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.512 [2024-11-21 03:38:29.070801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:41.512 [2024-11-21 03:38:29.070810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.320 ms 00:29:41.512 [2024-11-21 03:38:29.070823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.512 [2024-11-21 03:38:29.073118] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:41.773 [2024-11-21 03:38:29.077786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.773 [2024-11-21 03:38:29.077846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:41.773 [2024-11-21 03:38:29.077857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.670 ms 00:29:41.773 [2024-11-21 03:38:29.077866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.773 [2024-11-21 03:38:29.077975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.774 [2024-11-21 03:38:29.077988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:41.774 [2024-11-21 03:38:29.077999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:29:41.774 [2024-11-21 03:38:29.078007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.774 [2024-11-21 03:38:29.089433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.774 [2024-11-21 03:38:29.089476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:41.774 [2024-11-21 03:38:29.089488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.371 ms 00:29:41.774 [2024-11-21 03:38:29.089503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:29:41.774 [2024-11-21 03:38:29.089552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.774 [2024-11-21 03:38:29.089562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:41.774 [2024-11-21 03:38:29.089571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:41.774 [2024-11-21 03:38:29.089579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.774 [2024-11-21 03:38:29.089649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.774 [2024-11-21 03:38:29.089668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:41.774 [2024-11-21 03:38:29.089681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:41.774 [2024-11-21 03:38:29.089695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.774 [2024-11-21 03:38:29.089721] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:41.774 [2024-11-21 03:38:29.092559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.774 [2024-11-21 03:38:29.092767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:41.774 [2024-11-21 03:38:29.092785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.844 ms 00:29:41.774 [2024-11-21 03:38:29.092794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.774 [2024-11-21 03:38:29.092838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.774 [2024-11-21 03:38:29.092852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:41.774 [2024-11-21 03:38:29.092862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:41.774 [2024-11-21 03:38:29.092874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.774 [2024-11-21 03:38:29.092923] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:41.774 [2024-11-21 03:38:29.092952] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:41.774 [2024-11-21 03:38:29.092997] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:41.774 [2024-11-21 03:38:29.093021] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:41.774 [2024-11-21 03:38:29.093133] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:41.774 [2024-11-21 03:38:29.093147] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:41.774 [2024-11-21 03:38:29.093159] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:41.774 [2024-11-21 03:38:29.093170] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:41.774 [2024-11-21 03:38:29.093181] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:41.774 [2024-11-21 03:38:29.093190] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:41.774 [2024-11-21 03:38:29.093198] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:41.774 [2024-11-21 03:38:29.093207] ftl_layout.c: 
691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:41.774 [2024-11-21 03:38:29.093215] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:41.774 [2024-11-21 03:38:29.093225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.774 [2024-11-21 03:38:29.093240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:41.774 [2024-11-21 03:38:29.093248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.328 ms 00:29:41.774 [2024-11-21 03:38:29.093256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.774 [2024-11-21 03:38:29.093342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.774 [2024-11-21 03:38:29.093353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:41.774 [2024-11-21 03:38:29.093370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:29:41.774 [2024-11-21 03:38:29.093379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.774 [2024-11-21 03:38:29.093488] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:41.774 [2024-11-21 03:38:29.093501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:41.774 [2024-11-21 03:38:29.093519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:41.774 [2024-11-21 03:38:29.093529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.774 [2024-11-21 03:38:29.093538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:41.774 [2024-11-21 03:38:29.093546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:41.774 [2024-11-21 03:38:29.093554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:41.774 [2024-11-21 03:38:29.093563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:41.774 [2024-11-21 03:38:29.093575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:41.774 [2024-11-21 03:38:29.093584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.774 [2024-11-21 03:38:29.093593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:41.774 [2024-11-21 03:38:29.093601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:41.774 [2024-11-21 03:38:29.093609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.774 [2024-11-21 03:38:29.093617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:41.774 [2024-11-21 03:38:29.093641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:41.774 [2024-11-21 03:38:29.093651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.774 [2024-11-21 03:38:29.093659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:41.774 [2024-11-21 03:38:29.093667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:41.774 [2024-11-21 03:38:29.093682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.774 [2024-11-21 03:38:29.093692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:41.774 [2024-11-21 03:38:29.093700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:41.774 [2024-11-21 03:38:29.093706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:41.774 [2024-11-21 
03:38:29.093713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:41.774 [2024-11-21 03:38:29.093721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:41.774 [2024-11-21 03:38:29.093727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:41.774 [2024-11-21 03:38:29.093734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:41.774 [2024-11-21 03:38:29.093742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:41.774 [2024-11-21 03:38:29.093749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:41.774 [2024-11-21 03:38:29.093756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:41.774 [2024-11-21 03:38:29.093763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:41.774 [2024-11-21 03:38:29.093772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:41.774 [2024-11-21 03:38:29.093779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:41.774 [2024-11-21 03:38:29.093786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:41.774 [2024-11-21 03:38:29.093793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.774 [2024-11-21 03:38:29.093799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:41.774 [2024-11-21 03:38:29.093808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:41.774 [2024-11-21 03:38:29.093814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.774 [2024-11-21 03:38:29.093821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:41.774 [2024-11-21 03:38:29.093828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:41.774 [2024-11-21 03:38:29.093835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.774 [2024-11-21 03:38:29.093841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:41.774 [2024-11-21 03:38:29.093847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:41.774 [2024-11-21 03:38:29.093854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.774 [2024-11-21 03:38:29.093861] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:41.774 [2024-11-21 03:38:29.093870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:41.774 [2024-11-21 03:38:29.093878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:41.774 [2024-11-21 03:38:29.093889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.774 [2024-11-21 03:38:29.093917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:41.774 [2024-11-21 03:38:29.093926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:41.774 [2024-11-21 03:38:29.093934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:41.774 [2024-11-21 03:38:29.093942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:41.774 [2024-11-21 03:38:29.093949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:41.774 [2024-11-21 03:38:29.093957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:41.774 [2024-11-21 03:38:29.093966] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB 
metadata layout - nvc: 00:29:41.774 [2024-11-21 03:38:29.093976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:41.774 [2024-11-21 03:38:29.093987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:41.774 [2024-11-21 03:38:29.093997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:41.774 [2024-11-21 03:38:29.094005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:41.774 [2024-11-21 03:38:29.094012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:41.775 [2024-11-21 03:38:29.094023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:41.775 [2024-11-21 03:38:29.094031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:41.775 [2024-11-21 03:38:29.094039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:41.775 [2024-11-21 03:38:29.094059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:41.775 [2024-11-21 03:38:29.094067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:41.775 [2024-11-21 03:38:29.094074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:41.775 [2024-11-21 03:38:29.094082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:41.775 [2024-11-21 03:38:29.094089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:41.775 [2024-11-21 03:38:29.094096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:41.775 [2024-11-21 03:38:29.094105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:41.775 [2024-11-21 03:38:29.094112] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:41.775 [2024-11-21 03:38:29.094120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:41.775 [2024-11-21 03:38:29.094128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:41.775 [2024-11-21 03:38:29.094135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:41.775 [2024-11-21 03:38:29.094144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:41.775 [2024-11-21 03:38:29.094151] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:41.775 [2024-11-21 03:38:29.094159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.775 [2024-11-21 03:38:29.094169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:41.775 [2024-11-21 03:38:29.094177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.740 ms 00:29:41.775 [2024-11-21 03:38:29.094190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.775 [2024-11-21 03:38:29.094254] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:41.775 [2024-11-21 03:38:29.094280] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:45.973 [2024-11-21 03:38:32.856980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.973 [2024-11-21 03:38:32.857047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:45.973 [2024-11-21 03:38:32.857064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3762.708 ms 00:29:45.973 [2024-11-21 03:38:32.857080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.973 [2024-11-21 03:38:32.868137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.973 [2024-11-21 03:38:32.868182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:45.973 [2024-11-21 03:38:32.868196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.966 ms 00:29:45.973 [2024-11-21 03:38:32.868205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.973 [2024-11-21 03:38:32.868266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.973 [2024-11-21 03:38:32.868277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:45.973 [2024-11-21 03:38:32.868286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:45.973 [2024-11-21 03:38:32.868299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.973 [2024-11-21 03:38:32.879482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.973 [2024-11-21 03:38:32.879519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:45.973 [2024-11-21 03:38:32.879530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.113 ms 00:29:45.973 [2024-11-21 03:38:32.879537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.973 [2024-11-21 03:38:32.879571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.973 [2024-11-21 03:38:32.879580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:45.973 [2024-11-21 03:38:32.879592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:45.973 [2024-11-21 03:38:32.879600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.973 [2024-11-21 03:38:32.880082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.973 [2024-11-21 03:38:32.880102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:45.973 [2024-11-21 03:38:32.880113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.435 ms 00:29:45.973 [2024-11-21 03:38:32.880122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
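
Per the trace, the restarted target reopens the base device, binds cachen1p0 as the write-buffer cache, reloads and validates the v5 superblock, runs the layout-upgrade step, scrubs the 5-chunk NV cache data region (the dominant startup cost here, about 3.8 s), and then moves on to restoring L2P, valid-map, band, trim, and P2L state below. Driven over RPC instead of a saved tgt.json, creating such an FTL bdev looks roughly like this (base_bdev is a placeholder name; available options and defaults vary across SPDK versions):

```bash
# Hypothetical RPC equivalent of the tgt.json FTL configuration above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_create \
    -b ftl -d base_bdev -c cachen1p0
```

The device UUID logged at shutdown (1278155f-47f9-4744-82dc-0c1ed023fe11) identifies the existing instance; in SPDK versions whose create call accepts a uuid option, passing it loads the device rather than recreating it.
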
00:29:45.973 [2024-11-21 03:38:32.880172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.973 [2024-11-21 03:38:32.880182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:45.973 [2024-11-21 03:38:32.880191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:45.973 [2024-11-21 03:38:32.880203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.973 [2024-11-21 03:38:32.887589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.973 [2024-11-21 03:38:32.887624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:45.973 [2024-11-21 03:38:32.887634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.363 ms 00:29:45.973 [2024-11-21 03:38:32.887642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.973 [2024-11-21 03:38:32.890947] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:45.973 [2024-11-21 03:38:32.890986] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:45.973 [2024-11-21 03:38:32.891001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.973 [2024-11-21 03:38:32.891010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:45.973 [2024-11-21 03:38:32.891019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.270 ms 00:29:45.973 [2024-11-21 03:38:32.891026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.973 [2024-11-21 03:38:32.895080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.973 [2024-11-21 03:38:32.895118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:45.973 [2024-11-21 03:38:32.895133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.013 ms 00:29:45.973 [2024-11-21 03:38:32.895141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.973 [2024-11-21 03:38:32.896847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.973 [2024-11-21 03:38:32.896880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:45.973 [2024-11-21 03:38:32.896889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.665 ms 00:29:45.973 [2024-11-21 03:38:32.896912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.973 [2024-11-21 03:38:32.898644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.973 [2024-11-21 03:38:32.898849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:45.973 [2024-11-21 03:38:32.898866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.694 ms 00:29:45.973 [2024-11-21 03:38:32.898874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.973 [2024-11-21 03:38:32.899239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.973 [2024-11-21 03:38:32.899253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:45.973 [2024-11-21 03:38:32.899264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.261 ms 00:29:45.973 [2024-11-21 03:38:32.899273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.973 [2024-11-21 03:38:32.930208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:29:45.973 [2024-11-21 03:38:32.930446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:45.973 [2024-11-21 03:38:32.930468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.912 ms 00:29:45.973 [2024-11-21 03:38:32.930487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.973 [2024-11-21 03:38:32.938605] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:45.973 [2024-11-21 03:38:32.939462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.973 [2024-11-21 03:38:32.939492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:45.973 [2024-11-21 03:38:32.939503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.860 ms 00:29:45.973 [2024-11-21 03:38:32.939511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.973 [2024-11-21 03:38:32.939587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.973 [2024-11-21 03:38:32.939602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:45.973 [2024-11-21 03:38:32.939611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:45.973 [2024-11-21 03:38:32.939619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.973 [2024-11-21 03:38:32.939667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.973 [2024-11-21 03:38:32.939679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:45.973 [2024-11-21 03:38:32.939691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:45.973 [2024-11-21 03:38:32.939699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.973 [2024-11-21 03:38:32.939722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.973 [2024-11-21 03:38:32.939731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:45.974 [2024-11-21 03:38:32.939739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:45.974 [2024-11-21 03:38:32.939748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.974 [2024-11-21 03:38:32.939783] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:45.974 [2024-11-21 03:38:32.939793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.974 [2024-11-21 03:38:32.939801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:45.974 [2024-11-21 03:38:32.939810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:45.974 [2024-11-21 03:38:32.939827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.974 [2024-11-21 03:38:32.943955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.974 [2024-11-21 03:38:32.943988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:45.974 [2024-11-21 03:38:32.943998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.107 ms 00:29:45.974 [2024-11-21 03:38:32.944007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.974 [2024-11-21 03:38:32.944083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.974 [2024-11-21 03:38:32.944093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:45.974 
[2024-11-21 03:38:32.944101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:29:45.974 [2024-11-21 03:38:32.944116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.974 [2024-11-21 03:38:32.945169] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3874.654 ms, result 0 00:29:45.974 [2024-11-21 03:38:32.960310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.974 [2024-11-21 03:38:32.976313] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:45.974 [2024-11-21 03:38:32.984443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:45.974 03:38:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:45.974 03:38:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:45.974 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:45.974 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:45.974 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:45.974 [2024-11-21 03:38:33.220514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.974 [2024-11-21 03:38:33.220568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:45.974 [2024-11-21 03:38:33.220581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:45.974 [2024-11-21 03:38:33.220591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.974 [2024-11-21 03:38:33.220616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.974 [2024-11-21 03:38:33.220626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:45.974 [2024-11-21 03:38:33.220639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:45.974 [2024-11-21 03:38:33.220649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.974 [2024-11-21 03:38:33.220670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.974 [2024-11-21 03:38:33.220680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:45.974 [2024-11-21 03:38:33.220689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:45.974 [2024-11-21 03:38:33.220697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.974 [2024-11-21 03:38:33.220756] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.234 ms, result 0 00:29:45.974 true 00:29:45.974 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:45.974 { 00:29:45.974 "name": "ftl", 00:29:45.974 "properties": [ 00:29:45.974 { 00:29:45.974 "name": "superblock_version", 00:29:45.974 "value": 5, 00:29:45.974 "read-only": true 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "name": "base_device", 00:29:45.974 "bands": [ 00:29:45.974 { 00:29:45.974 "id": 0, 00:29:45.974 "state": "CLOSED", 00:29:45.974 "validity": 1.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 1, 00:29:45.974 "state": "CLOSED", 00:29:45.974 "validity": 1.0 
00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 2, 00:29:45.974 "state": "CLOSED", 00:29:45.974 "validity": 0.007843137254901933 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 3, 00:29:45.974 "state": "FREE", 00:29:45.974 "validity": 0.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 4, 00:29:45.974 "state": "FREE", 00:29:45.974 "validity": 0.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 5, 00:29:45.974 "state": "FREE", 00:29:45.974 "validity": 0.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 6, 00:29:45.974 "state": "FREE", 00:29:45.974 "validity": 0.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 7, 00:29:45.974 "state": "FREE", 00:29:45.974 "validity": 0.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 8, 00:29:45.974 "state": "FREE", 00:29:45.974 "validity": 0.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 9, 00:29:45.974 "state": "FREE", 00:29:45.974 "validity": 0.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 10, 00:29:45.974 "state": "FREE", 00:29:45.974 "validity": 0.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 11, 00:29:45.974 "state": "FREE", 00:29:45.974 "validity": 0.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 12, 00:29:45.974 "state": "FREE", 00:29:45.974 "validity": 0.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 13, 00:29:45.974 "state": "FREE", 00:29:45.974 "validity": 0.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 14, 00:29:45.974 "state": "FREE", 00:29:45.974 "validity": 0.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 15, 00:29:45.974 "state": "FREE", 00:29:45.974 "validity": 0.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 16, 00:29:45.974 "state": "FREE", 00:29:45.974 "validity": 0.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 17, 00:29:45.974 "state": "FREE", 00:29:45.974 "validity": 0.0 00:29:45.974 } 00:29:45.974 ], 00:29:45.974 "read-only": true 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "name": "cache_device", 00:29:45.974 "type": "bdev", 00:29:45.974 "chunks": [ 00:29:45.974 { 00:29:45.974 "id": 0, 00:29:45.974 "state": "INACTIVE", 00:29:45.974 "utilization": 0.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 1, 00:29:45.974 "state": "OPEN", 00:29:45.974 "utilization": 0.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 2, 00:29:45.974 "state": "OPEN", 00:29:45.974 "utilization": 0.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 3, 00:29:45.974 "state": "FREE", 00:29:45.974 "utilization": 0.0 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "id": 4, 00:29:45.974 "state": "FREE", 00:29:45.974 "utilization": 0.0 00:29:45.974 } 00:29:45.974 ], 00:29:45.974 "read-only": true 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "name": "verbose_mode", 00:29:45.974 "value": true, 00:29:45.974 "unit": "", 00:29:45.974 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:45.974 }, 00:29:45.974 { 00:29:45.974 "name": "prep_upgrade_on_shutdown", 00:29:45.974 "value": false, 00:29:45.974 "unit": "", 00:29:45.975 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:45.975 } 00:29:45.975 ] 00:29:45.975 } 00:29:45.975 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:45.975 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:45.975 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:46.236 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:46.236 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:46.236 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:46.236 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:46.236 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:46.497 Validate MD5 checksum, iteration 1 00:29:46.497 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:46.497 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:46.497 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:46.497 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:46.497 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:46.497 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:46.497 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:46.497 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:46.497 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:46.497 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:46.497 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:46.497 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:46.497 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:46.497 [2024-11-21 03:38:33.982460] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:29:46.497 [2024-11-21 03:38:33.982592] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95816 ] 00:29:46.756 [2024-11-21 03:38:34.118744] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
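[editor's note] For readability, a condensed Bash sketch of what the upgrade_shutdown.sh steps logged above are doing: query the FTL bdev's properties over RPC, count cache chunks with non-zero utilization via jq, then read 1024 MiB windows off ftln1 and compare their MD5 digests across iterations. Names like $testfile and ${sums[i]} are stand-ins, and tcp_dd abbreviates the spdk_dd invocation shown in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    used=$("$rpc" bdev_ftl_get_properties -b ftl |
      jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    # the script then branches on whether any chunks are in use ([[ $used -ne 0 ]])
    skip=0
    for i in 0 1; do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      # read 1024 blocks of 1 MiB from ftln1 at the current offset
      tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      sum=$(md5sum "$testfile" | cut -f1 '-d ')
      [[ $sum == "${sums[i]}" ]] || exit 1   # digest must match the expected value
      skip=$((skip + 1024))
    done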
00:29:46.756 [2024-11-21 03:38:34.147812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.756 [2024-11-21 03:38:34.176431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.139  [2024-11-21T03:38:36.643Z] Copying: 510/1024 [MB] (510 MBps) [2024-11-21T03:38:37.211Z] Copying: 1024/1024 [MB] (average 576 MBps) 00:29:49.646 00:29:49.646 03:38:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:49.646 03:38:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:52.177 03:38:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:52.177 Validate MD5 checksum, iteration 2 00:29:52.177 03:38:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fa3153bec977f8518770fce2e07003e9 00:29:52.177 03:38:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fa3153bec977f8518770fce2e07003e9 != \f\a\3\1\5\3\b\e\c\9\7\7\f\8\5\1\8\7\7\0\f\c\e\2\e\0\7\0\0\3\e\9 ]] 00:29:52.177 03:38:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:52.177 03:38:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:52.177 03:38:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:52.177 03:38:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:52.177 03:38:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:52.177 03:38:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:52.177 03:38:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:52.177 03:38:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:52.177 03:38:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:52.177 [2024-11-21 03:38:39.184113] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:29:52.177 [2024-11-21 03:38:39.184802] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95876 ] 00:29:52.177 [2024-11-21 03:38:39.316398] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:29:52.177 [2024-11-21 03:38:39.347905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.177 [2024-11-21 03:38:39.366722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.560  [2024-11-21T03:38:41.696Z] Copying: 618/1024 [MB] (618 MBps) [2024-11-21T03:38:42.268Z] Copying: 1024/1024 [MB] (average 594 MBps) 00:29:54.703 00:29:54.703 03:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:54.703 03:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a899523399f3063ae8b7a8b8a80d3cc3 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a899523399f3063ae8b7a8b8a80d3cc3 != \a\8\9\9\5\2\3\3\9\9\f\3\0\6\3\a\e\8\b\7\a\8\b\8\a\8\0\d\3\c\c\3 ]] 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 95744 ]] 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 95744 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95932 00:29:56.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95932 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95932 ']' 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:56.610 03:38:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:56.610 [2024-11-21 03:38:43.976097] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
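[editor's note] The tcp_target_shutdown_dirty step logged above amounts to the following: SIGKILL the target so the FTL instance, already flagged via "Set FTL dirty state", never gets a clean shutdown, then relaunch spdk_tgt from the saved tgt.json so the next startup must take the recovery path ("Recover band state", "Restore P2L checkpoints", open-chunk recovery) seen further below. A rough sketch with the ftl/common.sh helpers inlined; waitforlisten is the common.sh helper visible in the log:

    kill -9 "$spdk_tgt_pid"        # no graceful shutdown: FTL stays dirty
    unset spdk_tgt_pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"  # blocks until /var/tmp/spdk.sock answers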
00:29:56.610 [2024-11-21 03:38:43.976208] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95932 ] 00:29:56.610 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 95744 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:56.610 [2024-11-21 03:38:44.109165] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:56.610 [2024-11-21 03:38:44.134395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.610 [2024-11-21 03:38:44.159809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.179 [2024-11-21 03:38:44.461703] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:57.179 [2024-11-21 03:38:44.461759] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:57.179 [2024-11-21 03:38:44.609836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.179 [2024-11-21 03:38:44.610018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:57.179 [2024-11-21 03:38:44.610040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:57.179 [2024-11-21 03:38:44.610048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.179 [2024-11-21 03:38:44.610097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.179 [2024-11-21 03:38:44.610106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:57.179 [2024-11-21 03:38:44.610115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:29:57.179 [2024-11-21 03:38:44.610121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.179 [2024-11-21 03:38:44.610142] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:57.179 [2024-11-21 03:38:44.610341] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:57.179 [2024-11-21 03:38:44.610354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.179 [2024-11-21 03:38:44.610361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:57.179 [2024-11-21 03:38:44.610367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.219 ms 00:29:57.179 [2024-11-21 03:38:44.610373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.179 [2024-11-21 03:38:44.610601] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:57.179 [2024-11-21 03:38:44.614752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.179 [2024-11-21 03:38:44.614787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:57.179 [2024-11-21 03:38:44.614796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.151 ms 00:29:57.179 [2024-11-21 03:38:44.614803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.179 [2024-11-21 03:38:44.615749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.179 [2024-11-21 03:38:44.615772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super 
block 00:29:57.179 [2024-11-21 03:38:44.615786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:57.179 [2024-11-21 03:38:44.615792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.179 [2024-11-21 03:38:44.616015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.179 [2024-11-21 03:38:44.616024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:57.179 [2024-11-21 03:38:44.616030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:29:57.179 [2024-11-21 03:38:44.616036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.179 [2024-11-21 03:38:44.616068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.179 [2024-11-21 03:38:44.616075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:57.179 [2024-11-21 03:38:44.616081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:57.179 [2024-11-21 03:38:44.616087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.179 [2024-11-21 03:38:44.616107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.179 [2024-11-21 03:38:44.616118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:57.179 [2024-11-21 03:38:44.616125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:57.179 [2024-11-21 03:38:44.616130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.179 [2024-11-21 03:38:44.616146] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:57.179 [2024-11-21 03:38:44.616830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.179 [2024-11-21 03:38:44.616844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:57.179 [2024-11-21 03:38:44.616851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.687 ms 00:29:57.179 [2024-11-21 03:38:44.616857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.179 [2024-11-21 03:38:44.616879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.179 [2024-11-21 03:38:44.616888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:57.179 [2024-11-21 03:38:44.616894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:57.179 [2024-11-21 03:38:44.616915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.179 [2024-11-21 03:38:44.616933] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:57.179 [2024-11-21 03:38:44.616950] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:57.179 [2024-11-21 03:38:44.616980] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:57.179 [2024-11-21 03:38:44.616992] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:57.179 [2024-11-21 03:38:44.617076] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:57.179 [2024-11-21 03:38:44.617084] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:57.179 [2024-11-21 03:38:44.617095] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:57.179 [2024-11-21 03:38:44.617103] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:57.179 [2024-11-21 03:38:44.617111] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:57.179 [2024-11-21 03:38:44.617118] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:57.179 [2024-11-21 03:38:44.617124] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:57.179 [2024-11-21 03:38:44.617129] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:57.179 [2024-11-21 03:38:44.617135] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:57.179 [2024-11-21 03:38:44.617141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.179 [2024-11-21 03:38:44.617147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:57.179 [2024-11-21 03:38:44.617155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms 00:29:57.179 [2024-11-21 03:38:44.617160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.179 [2024-11-21 03:38:44.617225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.180 [2024-11-21 03:38:44.617233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:57.180 [2024-11-21 03:38:44.617238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:57.180 [2024-11-21 03:38:44.617244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.180 [2024-11-21 03:38:44.617319] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:57.180 [2024-11-21 03:38:44.617327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:57.180 [2024-11-21 03:38:44.617333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:57.180 [2024-11-21 03:38:44.617340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:57.180 [2024-11-21 03:38:44.617349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:57.180 [2024-11-21 03:38:44.617354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:57.180 [2024-11-21 03:38:44.617359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:57.180 [2024-11-21 03:38:44.617364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:57.180 [2024-11-21 03:38:44.617370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:57.180 [2024-11-21 03:38:44.617375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:57.180 [2024-11-21 03:38:44.617380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:57.180 [2024-11-21 03:38:44.617385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:57.180 [2024-11-21 03:38:44.617389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:57.180 [2024-11-21 03:38:44.617399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:57.180 [2024-11-21 03:38:44.617404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:57.180 [2024-11-21 03:38:44.617409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:57.180 [2024-11-21 
03:38:44.617418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:57.180 [2024-11-21 03:38:44.617424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:57.180 [2024-11-21 03:38:44.617428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:57.180 [2024-11-21 03:38:44.617434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:57.180 [2024-11-21 03:38:44.617439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:57.180 [2024-11-21 03:38:44.617444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:57.180 [2024-11-21 03:38:44.617449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:57.180 [2024-11-21 03:38:44.617454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:57.180 [2024-11-21 03:38:44.617459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:57.180 [2024-11-21 03:38:44.617464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:57.180 [2024-11-21 03:38:44.617469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:57.180 [2024-11-21 03:38:44.617473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:57.180 [2024-11-21 03:38:44.617478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:57.180 [2024-11-21 03:38:44.617485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:57.180 [2024-11-21 03:38:44.617489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:57.180 [2024-11-21 03:38:44.617494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:57.180 [2024-11-21 03:38:44.617500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:57.180 [2024-11-21 03:38:44.617512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:57.180 [2024-11-21 03:38:44.617517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:57.180 [2024-11-21 03:38:44.617522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:57.180 [2024-11-21 03:38:44.617527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:57.180 [2024-11-21 03:38:44.617531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:57.180 [2024-11-21 03:38:44.617536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:57.180 [2024-11-21 03:38:44.617541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:57.180 [2024-11-21 03:38:44.617545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:57.180 [2024-11-21 03:38:44.617550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:57.180 [2024-11-21 03:38:44.617555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:57.180 [2024-11-21 03:38:44.617559] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:57.180 [2024-11-21 03:38:44.617565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:57.180 [2024-11-21 03:38:44.617660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:57.180 [2024-11-21 03:38:44.617668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:57.180 [2024-11-21 03:38:44.617674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:57.180 [2024-11-21 
03:38:44.617679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:57.180 [2024-11-21 03:38:44.617685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:57.180 [2024-11-21 03:38:44.617690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:57.180 [2024-11-21 03:38:44.617697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:57.180 [2024-11-21 03:38:44.617702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:57.180 [2024-11-21 03:38:44.617709] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:57.180 [2024-11-21 03:38:44.617716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:57.180 [2024-11-21 03:38:44.617722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:57.180 [2024-11-21 03:38:44.617728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:57.180 [2024-11-21 03:38:44.617733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:57.180 [2024-11-21 03:38:44.617739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:57.180 [2024-11-21 03:38:44.617744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:57.180 [2024-11-21 03:38:44.617750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:57.180 [2024-11-21 03:38:44.617757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:57.180 [2024-11-21 03:38:44.617763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:57.180 [2024-11-21 03:38:44.617768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:57.180 [2024-11-21 03:38:44.617774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:57.180 [2024-11-21 03:38:44.617780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:57.180 [2024-11-21 03:38:44.617785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:57.180 [2024-11-21 03:38:44.617791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:57.180 [2024-11-21 03:38:44.617796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:57.180 [2024-11-21 03:38:44.617801] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:57.180 [2024-11-21 03:38:44.617807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:57.180 [2024-11-21 03:38:44.617813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:57.180 [2024-11-21 03:38:44.617819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:57.180 [2024-11-21 03:38:44.617825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:57.180 [2024-11-21 03:38:44.617830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:57.180 [2024-11-21 03:38:44.617836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.180 [2024-11-21 03:38:44.617843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:57.180 [2024-11-21 03:38:44.617853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.569 ms 00:29:57.180 [2024-11-21 03:38:44.617858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.180 [2024-11-21 03:38:44.626325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.180 [2024-11-21 03:38:44.626353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:57.180 [2024-11-21 03:38:44.626362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.408 ms 00:29:57.180 [2024-11-21 03:38:44.626371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.180 [2024-11-21 03:38:44.626400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.180 [2024-11-21 03:38:44.626407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:57.180 [2024-11-21 03:38:44.626414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:57.180 [2024-11-21 03:38:44.626423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.180 [2024-11-21 03:38:44.636475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.180 [2024-11-21 03:38:44.636506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:57.180 [2024-11-21 03:38:44.636515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.007 ms 00:29:57.180 [2024-11-21 03:38:44.636521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.180 [2024-11-21 03:38:44.636541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.180 [2024-11-21 03:38:44.636547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:57.180 [2024-11-21 03:38:44.636557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:57.180 [2024-11-21 03:38:44.636566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.180 [2024-11-21 03:38:44.636635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.180 [2024-11-21 03:38:44.636644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:57.180 [2024-11-21 03:38:44.636654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:29:57.180 [2024-11-21 03:38:44.636659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.181 [2024-11-21 03:38:44.636692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.181 
[2024-11-21 03:38:44.636699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:57.181 [2024-11-21 03:38:44.636705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:57.181 [2024-11-21 03:38:44.636713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.181 [2024-11-21 03:38:44.643295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.181 [2024-11-21 03:38:44.643320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:57.181 [2024-11-21 03:38:44.643333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.564 ms 00:29:57.181 [2024-11-21 03:38:44.643339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.181 [2024-11-21 03:38:44.643404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.181 [2024-11-21 03:38:44.643412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:57.181 [2024-11-21 03:38:44.643421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:57.181 [2024-11-21 03:38:44.643427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.181 [2024-11-21 03:38:44.657099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.181 [2024-11-21 03:38:44.657139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:57.181 [2024-11-21 03:38:44.657150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.656 ms 00:29:57.181 [2024-11-21 03:38:44.657156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.181 [2024-11-21 03:38:44.658303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.181 [2024-11-21 03:38:44.658443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:57.181 [2024-11-21 03:38:44.658461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.222 ms 00:29:57.181 [2024-11-21 03:38:44.658467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.181 [2024-11-21 03:38:44.677626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.181 [2024-11-21 03:38:44.677776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:57.181 [2024-11-21 03:38:44.677794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.136 ms 00:29:57.181 [2024-11-21 03:38:44.677800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.181 [2024-11-21 03:38:44.677927] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:57.181 [2024-11-21 03:38:44.678023] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:57.181 [2024-11-21 03:38:44.678113] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:57.181 [2024-11-21 03:38:44.678202] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:57.181 [2024-11-21 03:38:44.678210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.181 [2024-11-21 03:38:44.678216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:57.181 [2024-11-21 03:38:44.678227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.380 ms 00:29:57.181 [2024-11-21 03:38:44.678233] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.181 [2024-11-21 03:38:44.678263] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:57.181 [2024-11-21 03:38:44.678272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.181 [2024-11-21 03:38:44.678278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:57.181 [2024-11-21 03:38:44.678292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:57.181 [2024-11-21 03:38:44.678298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.181 [2024-11-21 03:38:44.681170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.181 [2024-11-21 03:38:44.681202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:57.181 [2024-11-21 03:38:44.681210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.851 ms 00:29:57.181 [2024-11-21 03:38:44.681219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.181 [2024-11-21 03:38:44.681739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.181 [2024-11-21 03:38:44.681759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:57.181 [2024-11-21 03:38:44.681768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:57.181 [2024-11-21 03:38:44.681774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.181 [2024-11-21 03:38:44.681817] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:57.181 [2024-11-21 03:38:44.681994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.181 [2024-11-21 03:38:44.682002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:57.181 [2024-11-21 03:38:44.682013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.178 ms 00:29:57.181 [2024-11-21 03:38:44.682019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.749 [2024-11-21 03:38:45.199061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.749 [2024-11-21 03:38:45.199139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:57.749 [2024-11-21 03:38:45.199153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 516.786 ms 00:29:57.749 [2024-11-21 03:38:45.199160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.749 [2024-11-21 03:38:45.200711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.749 [2024-11-21 03:38:45.200742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:57.749 [2024-11-21 03:38:45.200756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.082 ms 00:29:57.749 [2024-11-21 03:38:45.200768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.749 [2024-11-21 03:38:45.201202] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:29:57.749 [2024-11-21 03:38:45.201223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.749 [2024-11-21 03:38:45.201231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:57.749 [2024-11-21 03:38:45.201239] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.432 ms 00:29:57.749 [2024-11-21 03:38:45.201252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.749 [2024-11-21 03:38:45.201279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.749 [2024-11-21 03:38:45.201290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:57.749 [2024-11-21 03:38:45.201298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:57.749 [2024-11-21 03:38:45.201311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.749 [2024-11-21 03:38:45.201339] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 519.520 ms, result 0 00:29:57.749 [2024-11-21 03:38:45.201372] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:57.749 [2024-11-21 03:38:45.201451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.749 [2024-11-21 03:38:45.201460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:57.749 [2024-11-21 03:38:45.201468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.079 ms 00:29:57.749 [2024-11-21 03:38:45.201475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.693 [2024-11-21 03:38:45.883361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.693 [2024-11-21 03:38:45.883486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:58.693 [2024-11-21 03:38:45.883507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 681.482 ms 00:29:58.693 [2024-11-21 03:38:45.883516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.693 [2024-11-21 03:38:45.885882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.693 [2024-11-21 03:38:45.885953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:58.693 [2024-11-21 03:38:45.885967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.615 ms 00:29:58.693 [2024-11-21 03:38:45.885976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.693 [2024-11-21 03:38:45.886539] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:58.693 [2024-11-21 03:38:45.886584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.693 [2024-11-21 03:38:45.886595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:58.693 [2024-11-21 03:38:45.886606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.571 ms 00:29:58.693 [2024-11-21 03:38:45.886614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.693 [2024-11-21 03:38:45.886653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.693 [2024-11-21 03:38:45.886665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:58.693 [2024-11-21 03:38:45.886675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:58.693 [2024-11-21 03:38:45.886684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.693 [2024-11-21 03:38:45.886723] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 
685.338 ms, result 0 00:29:58.693 [2024-11-21 03:38:45.886780] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:58.693 [2024-11-21 03:38:45.886792] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:58.693 [2024-11-21 03:38:45.886815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.693 [2024-11-21 03:38:45.886825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:58.693 [2024-11-21 03:38:45.886836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1205.008 ms 00:29:58.693 [2024-11-21 03:38:45.886850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.693 [2024-11-21 03:38:45.886884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.693 [2024-11-21 03:38:45.886894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:58.693 [2024-11-21 03:38:45.886923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:58.693 [2024-11-21 03:38:45.886932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.693 [2024-11-21 03:38:45.897458] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:58.693 [2024-11-21 03:38:45.897882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.693 [2024-11-21 03:38:45.897923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:58.693 [2024-11-21 03:38:45.897942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.933 ms 00:29:58.693 [2024-11-21 03:38:45.897951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.693 [2024-11-21 03:38:45.898733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.693 [2024-11-21 03:38:45.898762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:58.693 [2024-11-21 03:38:45.898773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.687 ms 00:29:58.693 [2024-11-21 03:38:45.898782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.693 [2024-11-21 03:38:45.901073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.693 [2024-11-21 03:38:45.901105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:58.693 [2024-11-21 03:38:45.901116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.272 ms 00:29:58.693 [2024-11-21 03:38:45.901125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.693 [2024-11-21 03:38:45.901182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.693 [2024-11-21 03:38:45.901194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:58.693 [2024-11-21 03:38:45.901204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:58.693 [2024-11-21 03:38:45.901212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.693 [2024-11-21 03:38:45.901332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.693 [2024-11-21 03:38:45.901349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:58.693 [2024-11-21 03:38:45.901362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:58.693 [2024-11-21 
03:38:45.901371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.693 [2024-11-21 03:38:45.901400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.693 [2024-11-21 03:38:45.901412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:58.693 [2024-11-21 03:38:45.901421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:58.693 [2024-11-21 03:38:45.901439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.693 [2024-11-21 03:38:45.901478] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:58.693 [2024-11-21 03:38:45.901490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.693 [2024-11-21 03:38:45.901499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:58.693 [2024-11-21 03:38:45.901507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:58.693 [2024-11-21 03:38:45.901520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.693 [2024-11-21 03:38:45.901578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.693 [2024-11-21 03:38:45.901589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:58.693 [2024-11-21 03:38:45.901598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:29:58.694 [2024-11-21 03:38:45.901607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.694 [2024-11-21 03:38:45.903241] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1292.707 ms, result 0 00:29:58.694 [2024-11-21 03:38:45.918702] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.694 [2024-11-21 03:38:45.934711] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:58.694 [2024-11-21 03:38:45.942894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:59.264 03:38:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.264 03:38:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:59.264 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:59.264 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:59.264 Validate MD5 checksum, iteration 1 00:29:59.264 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:59.264 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:59.264 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:59.264 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:59.264 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:59.264 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:59.264 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:59.264 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 
00:29:59.264 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:59.264 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:59.264 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:59.264 [2024-11-21 03:38:46.596440] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:29:59.264 [2024-11-21 03:38:46.596579] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95967 ] 00:29:59.264 [2024-11-21 03:38:46.732187] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:59.264 [2024-11-21 03:38:46.763074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.264 [2024-11-21 03:38:46.782349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.645  [2024-11-21T03:38:49.152Z] Copying: 532/1024 [MB] (532 MBps) [2024-11-21T03:38:49.724Z] Copying: 1024/1024 [MB] (average 540 MBps) 00:30:02.159 00:30:02.159 03:38:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:02.159 03:38:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:04.075 03:38:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:04.075 03:38:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fa3153bec977f8518770fce2e07003e9 00:30:04.075 03:38:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fa3153bec977f8518770fce2e07003e9 != \f\a\3\1\5\3\b\e\c\9\7\7\f\8\5\1\8\7\7\0\f\c\e\2\e\0\7\0\0\3\e\9 ]] 00:30:04.075 03:38:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:04.075 03:38:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:04.075 Validate MD5 checksum, iteration 2 00:30:04.075 03:38:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:04.075 03:38:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:04.075 03:38:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:04.075 03:38:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:04.075 03:38:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:04.075 03:38:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:04.075 03:38:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:04.337 
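The spdk_dd transfers traced here are driven by the test_validate_checksum loop in upgrade_shutdown.sh; a minimal bash reconstruction from the xtrace above (the expected_md5 array is a hypothetical stand-in — how the reference sums are recorded is not visible in this excerpt):

# Sketch of the validate loop implied by the trace; not the verbatim script.
test_validate_checksum() {
    local skip=0 i sum
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Pull 1024 x 1 MiB blocks from the ftln1 bdev over NVMe/TCP into a scratch file.
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 '-d ')
        # expected_md5 is assumed; the trace only shows the literal comparison.
        if [[ $sum != "${expected_md5[i]}" ]]; then
            return 1
        fi
    done
    return 0
}

The --skip offset advancing 0 -> 1024 -> 2048 in the trace matches this pattern, with each iteration verifying a different 1 GiB window of the device.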
[2024-11-21 03:38:51.688299] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:30:04.337 [2024-11-21 03:38:51.688409] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96023 ] 00:30:04.337 [2024-11-21 03:38:51.819512] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:04.337 [2024-11-21 03:38:51.847459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.337 [2024-11-21 03:38:51.863672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.781  [2024-11-21T03:38:54.289Z] Copying: 529/1024 [MB] (529 MBps) [2024-11-21T03:38:54.549Z] Copying: 1024/1024 [MB] (average 535 MBps) 00:30:06.984 00:30:06.984 03:38:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:06.984 03:38:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a899523399f3063ae8b7a8b8a80d3cc3 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a899523399f3063ae8b7a8b8a80d3cc3 != \a\8\9\9\5\2\3\3\9\9\f\3\0\6\3\a\e\8\b\7\a\8\b\8\a\8\0\d\3\c\c\3 ]] 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 95932 ]] 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 95932 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 95932 ']' 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 95932 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:09.513 03:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:09.514 03:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95932 00:30:09.514 killing process with pid 95932 00:30:09.514 03:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:09.514 03:38:56 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:09.514 03:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95932' 00:30:09.514 03:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 95932 00:30:09.514 03:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 95932 00:30:09.514 [2024-11-21 03:38:56.711672] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:09.514 [2024-11-21 03:38:56.718217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.514 [2024-11-21 03:38:56.718251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:09.514 [2024-11-21 03:38:56.718263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:09.514 [2024-11-21 03:38:56.718270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.514 [2024-11-21 03:38:56.718287] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:09.514 [2024-11-21 03:38:56.718807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.514 [2024-11-21 03:38:56.718830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:09.514 [2024-11-21 03:38:56.718842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.500 ms 00:30:09.514 [2024-11-21 03:38:56.718848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.514 [2024-11-21 03:38:56.719044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.514 [2024-11-21 03:38:56.719059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:09.514 [2024-11-21 03:38:56.719067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.177 ms 00:30:09.514 [2024-11-21 03:38:56.719073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.514 [2024-11-21 03:38:56.720218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.514 [2024-11-21 03:38:56.720239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:09.514 [2024-11-21 03:38:56.720247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.131 ms 00:30:09.514 [2024-11-21 03:38:56.720257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.514 [2024-11-21 03:38:56.721124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.514 [2024-11-21 03:38:56.721145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:09.514 [2024-11-21 03:38:56.721153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.842 ms 00:30:09.514 [2024-11-21 03:38:56.721160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.514 [2024-11-21 03:38:56.722838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.514 [2024-11-21 03:38:56.722868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:09.514 [2024-11-21 03:38:56.722876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.647 ms 00:30:09.514 [2024-11-21 03:38:56.722887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.514 [2024-11-21 03:38:56.724220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.514 [2024-11-21 03:38:56.724246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:30:09.514 [2024-11-21 03:38:56.724254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.280 ms 00:30:09.514 [2024-11-21 03:38:56.724262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.514 [2024-11-21 03:38:56.724322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.514 [2024-11-21 03:38:56.724330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:09.514 [2024-11-21 03:38:56.724337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:30:09.514 [2024-11-21 03:38:56.724347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.514 [2024-11-21 03:38:56.725651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.514 [2024-11-21 03:38:56.725676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:09.514 [2024-11-21 03:38:56.725683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.291 ms 00:30:09.514 [2024-11-21 03:38:56.725688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.514 [2024-11-21 03:38:56.726976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.514 [2024-11-21 03:38:56.727001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:09.514 [2024-11-21 03:38:56.727007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.263 ms 00:30:09.514 [2024-11-21 03:38:56.727013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.514 [2024-11-21 03:38:56.728059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.514 [2024-11-21 03:38:56.728087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:09.514 [2024-11-21 03:38:56.728094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.021 ms 00:30:09.514 [2024-11-21 03:38:56.728099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.514 [2024-11-21 03:38:56.729108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.514 [2024-11-21 03:38:56.729132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:09.514 [2024-11-21 03:38:56.729140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.959 ms 00:30:09.514 [2024-11-21 03:38:56.729145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.514 [2024-11-21 03:38:56.729169] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:09.514 [2024-11-21 03:38:56.729180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:09.514 [2024-11-21 03:38:56.729189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:09.514 [2024-11-21 03:38:56.729196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:09.514 [2024-11-21 03:38:56.729203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:09.514 [2024-11-21 03:38:56.729209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:09.514 [2024-11-21 03:38:56.729215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:09.514 [2024-11-21 03:38:56.729220] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:09.514 [2024-11-21 03:38:56.729226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:09.514 [2024-11-21 03:38:56.729232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:09.514 [2024-11-21 03:38:56.729238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:09.514 [2024-11-21 03:38:56.729243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:09.514 [2024-11-21 03:38:56.729249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:09.514 [2024-11-21 03:38:56.729255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:09.514 [2024-11-21 03:38:56.729261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:09.514 [2024-11-21 03:38:56.729267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:09.514 [2024-11-21 03:38:56.729272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:09.514 [2024-11-21 03:38:56.729278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:09.514 [2024-11-21 03:38:56.729284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:09.514 [2024-11-21 03:38:56.729291] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:09.514 [2024-11-21 03:38:56.729297] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 1278155f-47f9-4744-82dc-0c1ed023fe11 00:30:09.514 [2024-11-21 03:38:56.729303] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:09.514 [2024-11-21 03:38:56.729308] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:09.514 [2024-11-21 03:38:56.729314] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:09.514 [2024-11-21 03:38:56.729320] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:09.514 [2024-11-21 03:38:56.729326] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:09.514 [2024-11-21 03:38:56.729332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:09.514 [2024-11-21 03:38:56.729337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:09.514 [2024-11-21 03:38:56.729343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:09.514 [2024-11-21 03:38:56.729352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:09.514 [2024-11-21 03:38:56.729359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.514 [2024-11-21 03:38:56.729369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:09.514 [2024-11-21 03:38:56.729376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:30:09.514 [2024-11-21 03:38:56.729381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.514 [2024-11-21 03:38:56.731059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.514 [2024-11-21 03:38:56.731082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:30:09.514 [2024-11-21 03:38:56.731089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.664 ms 00:30:09.514 [2024-11-21 03:38:56.731095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.514 [2024-11-21 03:38:56.731183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.514 [2024-11-21 03:38:56.731191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:09.514 [2024-11-21 03:38:56.731197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:09.514 [2024-11-21 03:38:56.731202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.514 [2024-11-21 03:38:56.737161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:09.514 [2024-11-21 03:38:56.737188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:09.514 [2024-11-21 03:38:56.737197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:09.514 [2024-11-21 03:38:56.737203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.514 [2024-11-21 03:38:56.737232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:09.514 [2024-11-21 03:38:56.737243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:09.514 [2024-11-21 03:38:56.737249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:09.514 [2024-11-21 03:38:56.737255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.514 [2024-11-21 03:38:56.737307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:09.514 [2024-11-21 03:38:56.737318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:09.515 [2024-11-21 03:38:56.737325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:09.515 [2024-11-21 03:38:56.737331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.515 [2024-11-21 03:38:56.737347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:09.515 [2024-11-21 03:38:56.737355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:09.515 [2024-11-21 03:38:56.737361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:09.515 [2024-11-21 03:38:56.737372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.515 [2024-11-21 03:38:56.748400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:09.515 [2024-11-21 03:38:56.748432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:09.515 [2024-11-21 03:38:56.748441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:09.515 [2024-11-21 03:38:56.748447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.515 [2024-11-21 03:38:56.756790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:09.515 [2024-11-21 03:38:56.756821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:09.515 [2024-11-21 03:38:56.756829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:09.515 [2024-11-21 03:38:56.756836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.515 [2024-11-21 03:38:56.756906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:09.515 [2024-11-21 03:38:56.756915] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:09.515 [2024-11-21 03:38:56.756921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:09.515 [2024-11-21 03:38:56.756928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.515 [2024-11-21 03:38:56.756961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:09.515 [2024-11-21 03:38:56.756969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:09.515 [2024-11-21 03:38:56.756977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:09.515 [2024-11-21 03:38:56.756984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.515 [2024-11-21 03:38:56.757046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:09.515 [2024-11-21 03:38:56.757059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:09.515 [2024-11-21 03:38:56.757065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:09.515 [2024-11-21 03:38:56.757070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.515 [2024-11-21 03:38:56.757094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:09.515 [2024-11-21 03:38:56.757100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:09.515 [2024-11-21 03:38:56.757110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:09.515 [2024-11-21 03:38:56.757116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.515 [2024-11-21 03:38:56.757149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:09.515 [2024-11-21 03:38:56.757156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:09.515 [2024-11-21 03:38:56.757162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:09.515 [2024-11-21 03:38:56.757168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.515 [2024-11-21 03:38:56.757206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:09.515 [2024-11-21 03:38:56.757214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:09.515 [2024-11-21 03:38:56.757222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:09.515 [2024-11-21 03:38:56.757228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.515 [2024-11-21 03:38:56.757336] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 39.088 ms, result 0 00:30:09.515 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:09.515 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:09.515 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:09.515 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:09.515 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:09.515 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:09.515 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:09.515 Remove shared memory files 00:30:09.515 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:30:09.515 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:09.515 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:09.515 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid95744 00:30:09.515 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:09.515 03:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:09.515 00:30:09.515 real 1m13.622s 00:30:09.515 user 1m36.881s 00:30:09.515 sys 0m21.003s 00:30:09.515 03:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:09.515 ************************************ 00:30:09.515 END TEST ftl_upgrade_shutdown 00:30:09.515 ************************************ 00:30:09.515 03:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:09.515 03:38:56 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:30:09.515 03:38:56 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:09.515 03:38:56 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:30:09.515 03:38:56 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:09.515 03:38:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:09.515 ************************************ 00:30:09.515 START TEST ftl_restore_fast 00:30:09.515 ************************************ 00:30:09.515 03:38:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:09.515 * Looking for test storage... 00:30:09.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:09.515 03:38:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:09.515 03:38:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:30:09.515 03:38:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:09.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.775 --rc genhtml_branch_coverage=1 00:30:09.775 --rc genhtml_function_coverage=1 00:30:09.775 --rc genhtml_legend=1 00:30:09.775 --rc geninfo_all_blocks=1 00:30:09.775 --rc geninfo_unexecuted_blocks=1 00:30:09.775 00:30:09.775 ' 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:09.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.775 --rc genhtml_branch_coverage=1 00:30:09.775 --rc genhtml_function_coverage=1 00:30:09.775 --rc genhtml_legend=1 00:30:09.775 --rc geninfo_all_blocks=1 00:30:09.775 --rc geninfo_unexecuted_blocks=1 00:30:09.775 00:30:09.775 ' 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:09.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.775 --rc genhtml_branch_coverage=1 00:30:09.775 --rc genhtml_function_coverage=1 00:30:09.775 --rc genhtml_legend=1 00:30:09.775 --rc geninfo_all_blocks=1 00:30:09.775 --rc geninfo_unexecuted_blocks=1 00:30:09.775 00:30:09.775 ' 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:09.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.775 --rc genhtml_branch_coverage=1 00:30:09.775 --rc genhtml_function_coverage=1 00:30:09.775 --rc genhtml_legend=1 00:30:09.775 --rc geninfo_all_blocks=1 00:30:09.775 --rc geninfo_unexecuted_blocks=1 00:30:09.775 00:30:09.775 ' 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:30:09.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
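The 'Waiting for process to start up...' line above is printed by the waitforlisten helper from autotest_common.sh; a simplified sketch of its shape (the probe command and retry bound here are assumptions — the real helper carries more bookkeeping):

# Simplified waitforlisten sketch; not the verbatim helper.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1  # target died before listening
        # rpc_get_methods answers once the app's RPC server is up.
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1
}

The '(( i == 0 ))' and 'return 0' xtrace lines that follow for pid 96156 are this loop succeeding on its first probe.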
00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Pdf0qExJLQ 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=96156 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 96156 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 96156 ']' 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.775 03:38:57 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:09.775 [2024-11-21 03:38:57.220329] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:30:09.775 [2024-11-21 03:38:57.220444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96156 ] 00:30:10.035 [2024-11-21 03:38:57.353244] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
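The option handling and target launch traced above condense to roughly the following (names and paths taken from the xtrace; restore_kill is defined later in restore.sh and not shown in this excerpt):

# Condensed reconstruction of the restore.sh prologue; not the verbatim script.
mount_dir=$(mktemp -d)
while getopts ':u:c:f' opt; do
    case $opt in
        u) uuid=$OPTARG ;;      # -u <uuid> accepted per the optstring, unused in this run
        c) nv_cache=$OPTARG ;;  # -c 0000:00:10.0 selects the NV cache device
        f) fast_shutdown=1 ;;   # -f exercises bdev_ftl_create --fast-shutdown
    esac
done
shift $((OPTIND - 1))           # xtraces as 'shift 3' for '-f -c <bdf>'
device=$1                       # remaining positional: base device 0000:00:11.0
timeout=240

trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
"$spdk_tgt_bin" &               # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
svcpid=$!
waitforlisten "$svcpid"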
00:30:10.035 [2024-11-21 03:38:57.384280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.035 [2024-11-21 03:38:57.404909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.605 03:38:58 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:10.605 03:38:58 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:30:10.605 03:38:58 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:10.605 03:38:58 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:30:10.605 03:38:58 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:10.605 03:38:58 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:30:10.605 03:38:58 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:30:10.605 03:38:58 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:10.866 03:38:58 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:10.866 03:38:58 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:30:10.866 03:38:58 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:10.866 03:38:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:30:10.866 03:38:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:10.866 03:38:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:10.866 03:38:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:10.866 03:38:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:11.126 03:38:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:11.126 { 00:30:11.126 "name": "nvme0n1", 00:30:11.126 "aliases": [ 00:30:11.126 "4d6d86af-9226-4f50-a415-3f1daacd7a9f" 00:30:11.126 ], 00:30:11.126 "product_name": "NVMe disk", 00:30:11.126 "block_size": 4096, 00:30:11.126 "num_blocks": 1310720, 00:30:11.126 "uuid": "4d6d86af-9226-4f50-a415-3f1daacd7a9f", 00:30:11.126 "numa_id": -1, 00:30:11.126 "assigned_rate_limits": { 00:30:11.126 "rw_ios_per_sec": 0, 00:30:11.126 "rw_mbytes_per_sec": 0, 00:30:11.126 "r_mbytes_per_sec": 0, 00:30:11.126 "w_mbytes_per_sec": 0 00:30:11.126 }, 00:30:11.126 "claimed": true, 00:30:11.126 "claim_type": "read_many_write_one", 00:30:11.126 "zoned": false, 00:30:11.126 "supported_io_types": { 00:30:11.126 "read": true, 00:30:11.126 "write": true, 00:30:11.126 "unmap": true, 00:30:11.126 "flush": true, 00:30:11.126 "reset": true, 00:30:11.126 "nvme_admin": true, 00:30:11.126 "nvme_io": true, 00:30:11.126 "nvme_io_md": false, 00:30:11.126 "write_zeroes": true, 00:30:11.126 "zcopy": false, 00:30:11.126 "get_zone_info": false, 00:30:11.126 "zone_management": false, 00:30:11.126 "zone_append": false, 00:30:11.126 "compare": true, 00:30:11.126 "compare_and_write": false, 00:30:11.126 "abort": true, 00:30:11.126 "seek_hole": false, 00:30:11.126 "seek_data": false, 00:30:11.126 "copy": true, 00:30:11.126 "nvme_iov_md": false 00:30:11.126 }, 00:30:11.126 "driver_specific": { 00:30:11.126 "nvme": [ 00:30:11.126 { 00:30:11.126 "pci_address": "0000:00:11.0", 00:30:11.126 "trid": { 00:30:11.126 "trtype": "PCIe", 00:30:11.126 "traddr": "0000:00:11.0" 00:30:11.126 }, 00:30:11.126 "ctrlr_data": { 00:30:11.126 "cntlid": 0, 00:30:11.126 
"vendor_id": "0x1b36", 00:30:11.126 "model_number": "QEMU NVMe Ctrl", 00:30:11.126 "serial_number": "12341", 00:30:11.126 "firmware_revision": "8.0.0", 00:30:11.126 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:11.126 "oacs": { 00:30:11.126 "security": 0, 00:30:11.126 "format": 1, 00:30:11.126 "firmware": 0, 00:30:11.126 "ns_manage": 1 00:30:11.126 }, 00:30:11.126 "multi_ctrlr": false, 00:30:11.126 "ana_reporting": false 00:30:11.126 }, 00:30:11.126 "vs": { 00:30:11.126 "nvme_version": "1.4" 00:30:11.126 }, 00:30:11.126 "ns_data": { 00:30:11.126 "id": 1, 00:30:11.126 "can_share": false 00:30:11.126 } 00:30:11.126 } 00:30:11.126 ], 00:30:11.126 "mp_policy": "active_passive" 00:30:11.126 } 00:30:11.126 } 00:30:11.126 ]' 00:30:11.126 03:38:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:11.126 03:38:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:11.126 03:38:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:11.126 03:38:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:11.126 03:38:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:11.126 03:38:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:30:11.126 03:38:58 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:30:11.126 03:38:58 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:11.126 03:38:58 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:30:11.126 03:38:58 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:11.126 03:38:58 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:11.126 03:38:58 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=2f5d8abc-3783-400e-8d00-ca954519fca5 00:30:11.126 03:38:58 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:30:11.126 03:38:58 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2f5d8abc-3783-400e-8d00-ca954519fca5 00:30:11.386 03:38:58 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:11.645 03:38:59 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=77e0680d-904c-4df4-bc95-bce8bcffd296 00:30:11.645 03:38:59 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 77e0680d-904c-4df4-bc95-bce8bcffd296 00:30:11.904 03:38:59 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=d4046a2a-2421-4f68-a00c-6435781b4089 00:30:11.904 03:38:59 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:30:11.904 03:38:59 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d4046a2a-2421-4f68-a00c-6435781b4089 00:30:11.904 03:38:59 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:30:11.904 03:38:59 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:11.904 03:38:59 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=d4046a2a-2421-4f68-a00c-6435781b4089 00:30:11.904 03:38:59 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:30:11.904 03:38:59 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size d4046a2a-2421-4f68-a00c-6435781b4089 00:30:11.904 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local 
bdev_name=d4046a2a-2421-4f68-a00c-6435781b4089 00:30:11.904 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:11.904 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:11.904 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:11.904 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d4046a2a-2421-4f68-a00c-6435781b4089 00:30:11.904 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:11.904 { 00:30:11.904 "name": "d4046a2a-2421-4f68-a00c-6435781b4089", 00:30:11.904 "aliases": [ 00:30:11.904 "lvs/nvme0n1p0" 00:30:11.904 ], 00:30:11.904 "product_name": "Logical Volume", 00:30:11.904 "block_size": 4096, 00:30:11.904 "num_blocks": 26476544, 00:30:11.904 "uuid": "d4046a2a-2421-4f68-a00c-6435781b4089", 00:30:11.904 "assigned_rate_limits": { 00:30:11.904 "rw_ios_per_sec": 0, 00:30:11.904 "rw_mbytes_per_sec": 0, 00:30:11.904 "r_mbytes_per_sec": 0, 00:30:11.904 "w_mbytes_per_sec": 0 00:30:11.904 }, 00:30:11.904 "claimed": false, 00:30:11.904 "zoned": false, 00:30:11.904 "supported_io_types": { 00:30:11.904 "read": true, 00:30:11.904 "write": true, 00:30:11.904 "unmap": true, 00:30:11.904 "flush": false, 00:30:11.904 "reset": true, 00:30:11.904 "nvme_admin": false, 00:30:11.904 "nvme_io": false, 00:30:11.904 "nvme_io_md": false, 00:30:11.904 "write_zeroes": true, 00:30:11.904 "zcopy": false, 00:30:11.904 "get_zone_info": false, 00:30:11.904 "zone_management": false, 00:30:11.904 "zone_append": false, 00:30:11.904 "compare": false, 00:30:11.904 "compare_and_write": false, 00:30:11.904 "abort": false, 00:30:11.904 "seek_hole": true, 00:30:11.904 "seek_data": true, 00:30:11.904 "copy": false, 00:30:11.904 "nvme_iov_md": false 00:30:11.904 }, 00:30:11.904 "driver_specific": { 00:30:11.904 "lvol": { 00:30:11.904 "lvol_store_uuid": "77e0680d-904c-4df4-bc95-bce8bcffd296", 00:30:11.904 "base_bdev": "nvme0n1", 00:30:11.904 "thin_provision": true, 00:30:11.904 "num_allocated_clusters": 0, 00:30:11.904 "snapshot": false, 00:30:11.904 "clone": false, 00:30:11.905 "esnap_clone": false 00:30:11.905 } 00:30:11.905 } 00:30:11.905 } 00:30:11.905 ]' 00:30:11.905 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:12.165 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:12.165 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:12.165 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:12.165 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:12.165 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:12.165 03:38:59 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:30:12.165 03:38:59 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:30:12.165 03:38:59 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:12.425 03:38:59 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:12.425 03:38:59 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:12.425 03:38:59 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size d4046a2a-2421-4f68-a00c-6435781b4089 00:30:12.425 03:38:59 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1382 -- # local bdev_name=d4046a2a-2421-4f68-a00c-6435781b4089 00:30:12.425 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:12.425 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:12.425 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:12.425 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d4046a2a-2421-4f68-a00c-6435781b4089 00:30:12.425 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:12.425 { 00:30:12.425 "name": "d4046a2a-2421-4f68-a00c-6435781b4089", 00:30:12.425 "aliases": [ 00:30:12.425 "lvs/nvme0n1p0" 00:30:12.425 ], 00:30:12.425 "product_name": "Logical Volume", 00:30:12.425 "block_size": 4096, 00:30:12.425 "num_blocks": 26476544, 00:30:12.425 "uuid": "d4046a2a-2421-4f68-a00c-6435781b4089", 00:30:12.425 "assigned_rate_limits": { 00:30:12.425 "rw_ios_per_sec": 0, 00:30:12.425 "rw_mbytes_per_sec": 0, 00:30:12.425 "r_mbytes_per_sec": 0, 00:30:12.425 "w_mbytes_per_sec": 0 00:30:12.425 }, 00:30:12.425 "claimed": false, 00:30:12.425 "zoned": false, 00:30:12.425 "supported_io_types": { 00:30:12.425 "read": true, 00:30:12.425 "write": true, 00:30:12.425 "unmap": true, 00:30:12.425 "flush": false, 00:30:12.425 "reset": true, 00:30:12.425 "nvme_admin": false, 00:30:12.425 "nvme_io": false, 00:30:12.425 "nvme_io_md": false, 00:30:12.425 "write_zeroes": true, 00:30:12.425 "zcopy": false, 00:30:12.425 "get_zone_info": false, 00:30:12.425 "zone_management": false, 00:30:12.425 "zone_append": false, 00:30:12.425 "compare": false, 00:30:12.425 "compare_and_write": false, 00:30:12.425 "abort": false, 00:30:12.425 "seek_hole": true, 00:30:12.425 "seek_data": true, 00:30:12.425 "copy": false, 00:30:12.425 "nvme_iov_md": false 00:30:12.425 }, 00:30:12.425 "driver_specific": { 00:30:12.425 "lvol": { 00:30:12.425 "lvol_store_uuid": "77e0680d-904c-4df4-bc95-bce8bcffd296", 00:30:12.425 "base_bdev": "nvme0n1", 00:30:12.425 "thin_provision": true, 00:30:12.425 "num_allocated_clusters": 0, 00:30:12.425 "snapshot": false, 00:30:12.425 "clone": false, 00:30:12.425 "esnap_clone": false 00:30:12.425 } 00:30:12.425 } 00:30:12.425 } 00:30:12.425 ]' 00:30:12.425 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:12.685 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:12.685 03:38:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:12.685 03:39:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:12.685 03:39:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:12.685 03:39:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:12.685 03:39:00 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:30:12.685 03:39:00 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:12.685 03:39:00 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:30:12.685 03:39:00 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size d4046a2a-2421-4f68-a00c-6435781b4089 00:30:12.685 03:39:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=d4046a2a-2421-4f68-a00c-6435781b4089 00:30:12.685 03:39:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 
-- # local bdev_info 00:30:12.685 03:39:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:12.685 03:39:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:12.685 03:39:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d4046a2a-2421-4f68-a00c-6435781b4089 00:30:12.945 03:39:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:12.945 { 00:30:12.945 "name": "d4046a2a-2421-4f68-a00c-6435781b4089", 00:30:12.945 "aliases": [ 00:30:12.945 "lvs/nvme0n1p0" 00:30:12.945 ], 00:30:12.945 "product_name": "Logical Volume", 00:30:12.945 "block_size": 4096, 00:30:12.945 "num_blocks": 26476544, 00:30:12.945 "uuid": "d4046a2a-2421-4f68-a00c-6435781b4089", 00:30:12.945 "assigned_rate_limits": { 00:30:12.945 "rw_ios_per_sec": 0, 00:30:12.945 "rw_mbytes_per_sec": 0, 00:30:12.945 "r_mbytes_per_sec": 0, 00:30:12.945 "w_mbytes_per_sec": 0 00:30:12.945 }, 00:30:12.945 "claimed": false, 00:30:12.945 "zoned": false, 00:30:12.945 "supported_io_types": { 00:30:12.945 "read": true, 00:30:12.945 "write": true, 00:30:12.945 "unmap": true, 00:30:12.945 "flush": false, 00:30:12.945 "reset": true, 00:30:12.945 "nvme_admin": false, 00:30:12.945 "nvme_io": false, 00:30:12.945 "nvme_io_md": false, 00:30:12.945 "write_zeroes": true, 00:30:12.945 "zcopy": false, 00:30:12.945 "get_zone_info": false, 00:30:12.945 "zone_management": false, 00:30:12.945 "zone_append": false, 00:30:12.945 "compare": false, 00:30:12.945 "compare_and_write": false, 00:30:12.945 "abort": false, 00:30:12.945 "seek_hole": true, 00:30:12.945 "seek_data": true, 00:30:12.945 "copy": false, 00:30:12.945 "nvme_iov_md": false 00:30:12.945 }, 00:30:12.945 "driver_specific": { 00:30:12.945 "lvol": { 00:30:12.945 "lvol_store_uuid": "77e0680d-904c-4df4-bc95-bce8bcffd296", 00:30:12.945 "base_bdev": "nvme0n1", 00:30:12.945 "thin_provision": true, 00:30:12.945 "num_allocated_clusters": 0, 00:30:12.945 "snapshot": false, 00:30:12.945 "clone": false, 00:30:12.945 "esnap_clone": false 00:30:12.945 } 00:30:12.945 } 00:30:12.945 } 00:30:12.945 ]' 00:30:12.945 03:39:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:12.945 03:39:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:12.945 03:39:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:12.945 03:39:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:12.945 03:39:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:12.945 03:39:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:12.945 03:39:00 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:30:12.945 03:39:00 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d4046a2a-2421-4f68-a00c-6435781b4089 --l2p_dram_limit 10' 00:30:12.945 03:39:00 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:30:12.945 03:39:00 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:30:12.945 03:39:00 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:30:12.945 03:39:00 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:30:12.945 03:39:00 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:30:12.945 03:39:00 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d4046a2a-2421-4f68-a00c-6435781b4089 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:30:13.208 [2024-11-21 03:39:00.636349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.208 [2024-11-21 03:39:00.636392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:13.208 [2024-11-21 03:39:00.636404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:13.208 [2024-11-21 03:39:00.636411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.208 [2024-11-21 03:39:00.636461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.208 [2024-11-21 03:39:00.636469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:13.208 [2024-11-21 03:39:00.636481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:13.208 [2024-11-21 03:39:00.636486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.208 [2024-11-21 03:39:00.636505] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:13.208 [2024-11-21 03:39:00.636792] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:13.208 [2024-11-21 03:39:00.636809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.208 [2024-11-21 03:39:00.636815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:13.208 [2024-11-21 03:39:00.636822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:30:13.208 [2024-11-21 03:39:00.636828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.208 [2024-11-21 03:39:00.636858] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9bb65078-7682-42ba-84f4-487ad2767c85 00:30:13.208 [2024-11-21 03:39:00.637852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.208 [2024-11-21 03:39:00.637882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:13.208 [2024-11-21 03:39:00.637890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:30:13.208 [2024-11-21 03:39:00.637907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.208 [2024-11-21 03:39:00.642718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.208 [2024-11-21 03:39:00.642750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:13.208 [2024-11-21 03:39:00.642757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.750 ms 00:30:13.208 [2024-11-21 03:39:00.642767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.208 [2024-11-21 03:39:00.642837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.208 [2024-11-21 03:39:00.642845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:13.208 [2024-11-21 03:39:00.642852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:30:13.208 [2024-11-21 03:39:00.642859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.208 [2024-11-21 03:39:00.642910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.208 [2024-11-21 03:39:00.642921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO 
device 00:30:13.208 [2024-11-21 03:39:00.642928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:13.209 [2024-11-21 03:39:00.642935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.209 [2024-11-21 03:39:00.642952] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:13.209 [2024-11-21 03:39:00.644215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.209 [2024-11-21 03:39:00.644242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:13.209 [2024-11-21 03:39:00.644252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.265 ms 00:30:13.209 [2024-11-21 03:39:00.644259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.209 [2024-11-21 03:39:00.644286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.209 [2024-11-21 03:39:00.644293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:13.209 [2024-11-21 03:39:00.644303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:13.209 [2024-11-21 03:39:00.644309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.209 [2024-11-21 03:39:00.644324] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:13.209 [2024-11-21 03:39:00.644433] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:13.209 [2024-11-21 03:39:00.644444] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:13.209 [2024-11-21 03:39:00.644453] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:13.209 [2024-11-21 03:39:00.644463] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:13.209 [2024-11-21 03:39:00.644475] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:13.209 [2024-11-21 03:39:00.644489] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:13.209 [2024-11-21 03:39:00.644497] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:13.209 [2024-11-21 03:39:00.644505] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:13.209 [2024-11-21 03:39:00.644512] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:13.209 [2024-11-21 03:39:00.644523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.209 [2024-11-21 03:39:00.644529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:13.209 [2024-11-21 03:39:00.644539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:30:13.209 [2024-11-21 03:39:00.644545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.209 [2024-11-21 03:39:00.644610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.209 [2024-11-21 03:39:00.644615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:13.209 [2024-11-21 03:39:00.644623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:30:13.209 [2024-11-21 03:39:00.644629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.209 
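
The 103424.00 MiB base-device capacity logged here is the same figure the get_bdev_size trace derived above from the bdev_get_bdevs output. A minimal sketch of that arithmetic, reusing the rpc.py and jq calls exactly as logged (the helper name get_size_mb is hypothetical; the real helper lives in common/autotest_common.sh):

get_size_mb() {  # hypothetical name; logic mirrors the traced get_bdev_size steps
  local bdev=$1 info bs nb
  info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev")
  bs=$(jq '.[] .block_size' <<< "$info")   # 4096 in this run
  nb=$(jq '.[] .num_blocks' <<< "$info")   # 26476544 in this run
  echo $(( bs * nb / 1024 / 1024 ))        # 4096 * 26476544 / 2^20 = 103424 MiB
}

The l2p region size in the dump that follows is consistent with the figures logged here: 20971520 L2P entries at 4 bytes per address is exactly 80 MiB.
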
[2024-11-21 03:39:00.644699] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:13.209 [2024-11-21 03:39:00.644712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:13.209 [2024-11-21 03:39:00.644720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:13.209 [2024-11-21 03:39:00.644725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.209 [2024-11-21 03:39:00.644732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:13.209 [2024-11-21 03:39:00.644737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:13.209 [2024-11-21 03:39:00.644744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:13.209 [2024-11-21 03:39:00.644750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:13.209 [2024-11-21 03:39:00.644757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:13.209 [2024-11-21 03:39:00.644763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:13.209 [2024-11-21 03:39:00.644770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:13.209 [2024-11-21 03:39:00.644775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:13.209 [2024-11-21 03:39:00.644783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:13.209 [2024-11-21 03:39:00.644788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:13.209 [2024-11-21 03:39:00.644795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:13.209 [2024-11-21 03:39:00.644800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.209 [2024-11-21 03:39:00.644806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:13.209 [2024-11-21 03:39:00.644811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:13.209 [2024-11-21 03:39:00.644817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.209 [2024-11-21 03:39:00.644822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:13.209 [2024-11-21 03:39:00.644828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:13.209 [2024-11-21 03:39:00.644832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:13.209 [2024-11-21 03:39:00.644838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:13.209 [2024-11-21 03:39:00.644843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:13.209 [2024-11-21 03:39:00.644849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:13.209 [2024-11-21 03:39:00.644854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:13.209 [2024-11-21 03:39:00.644860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:13.209 [2024-11-21 03:39:00.644864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:13.209 [2024-11-21 03:39:00.644872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:13.209 [2024-11-21 03:39:00.644877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:13.209 [2024-11-21 03:39:00.644884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:13.209 [2024-11-21 03:39:00.644888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:30:13.209 [2024-11-21 03:39:00.644895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:13.209 [2024-11-21 03:39:00.644916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:13.209 [2024-11-21 03:39:00.644924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:13.209 [2024-11-21 03:39:00.644929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:13.209 [2024-11-21 03:39:00.644935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:13.209 [2024-11-21 03:39:00.644940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:13.209 [2024-11-21 03:39:00.644946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:13.209 [2024-11-21 03:39:00.644951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.209 [2024-11-21 03:39:00.644957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:13.209 [2024-11-21 03:39:00.644963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:13.209 [2024-11-21 03:39:00.644970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.209 [2024-11-21 03:39:00.644975] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:13.209 [2024-11-21 03:39:00.644984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:13.209 [2024-11-21 03:39:00.644989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:13.209 [2024-11-21 03:39:00.644996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.209 [2024-11-21 03:39:00.645002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:13.209 [2024-11-21 03:39:00.645009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:13.209 [2024-11-21 03:39:00.645013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:13.209 [2024-11-21 03:39:00.645020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:13.209 [2024-11-21 03:39:00.645025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:13.209 [2024-11-21 03:39:00.645031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:13.209 [2024-11-21 03:39:00.645039] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:13.209 [2024-11-21 03:39:00.645048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:13.209 [2024-11-21 03:39:00.645055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:13.209 [2024-11-21 03:39:00.645062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:13.209 [2024-11-21 03:39:00.645067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:13.209 [2024-11-21 03:39:00.645074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:13.209 [2024-11-21 03:39:00.645080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x5920 blk_sz:0x800 00:30:13.209 [2024-11-21 03:39:00.645087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:13.209 [2024-11-21 03:39:00.645092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:13.209 [2024-11-21 03:39:00.645099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:13.209 [2024-11-21 03:39:00.645104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:13.209 [2024-11-21 03:39:00.645111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:13.209 [2024-11-21 03:39:00.645116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:13.209 [2024-11-21 03:39:00.645122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:13.209 [2024-11-21 03:39:00.645127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:13.209 [2024-11-21 03:39:00.645135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:13.209 [2024-11-21 03:39:00.645140] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:13.210 [2024-11-21 03:39:00.645147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:13.210 [2024-11-21 03:39:00.645152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:13.210 [2024-11-21 03:39:00.645159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:13.210 [2024-11-21 03:39:00.645165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:13.210 [2024-11-21 03:39:00.645172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:13.210 [2024-11-21 03:39:00.645178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.210 [2024-11-21 03:39:00.645186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:13.210 [2024-11-21 03:39:00.645192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:30:13.210 [2024-11-21 03:39:00.645198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.210 [2024-11-21 03:39:00.645226] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
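
The blk_sz fields in this superblock dump are counts of 4 KiB blocks, so they can be cross-checked against the MiB figures in the layout dump above; a quick shell-arithmetic check (values taken from the dump):

echo $(( 0x5000 * 4096 / 1024 / 1024 ))  # 80  -> l2p region (type 0x2), matching 80.00 MiB
echo $(( 0x800  * 4096 / 1024 / 1024 ))  # 8   -> each p2l region (types 0xa-0xd), matching 8.00 MiB
echo $(( 0x20   * 4096 / 1024 ))         # 128 -> sb region (type 0x0), 128 KiB, matching 0.12 MiB
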
00:30:13.210 [2024-11-21 03:39:00.645234] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:17.415 [2024-11-21 03:39:04.775710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.415 [2024-11-21 03:39:04.775783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:17.415 [2024-11-21 03:39:04.775801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4130.462 ms 00:30:17.415 [2024-11-21 03:39:04.775812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.415 [2024-11-21 03:39:04.789819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.415 [2024-11-21 03:39:04.789879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:17.415 [2024-11-21 03:39:04.789894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.834 ms 00:30:17.415 [2024-11-21 03:39:04.789925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.415 [2024-11-21 03:39:04.790053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.415 [2024-11-21 03:39:04.790068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:17.415 [2024-11-21 03:39:04.790079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:30:17.415 [2024-11-21 03:39:04.790090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.415 [2024-11-21 03:39:04.802792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.415 [2024-11-21 03:39:04.802847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:17.415 [2024-11-21 03:39:04.802859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.658 ms 00:30:17.415 [2024-11-21 03:39:04.802871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.415 [2024-11-21 03:39:04.802919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.415 [2024-11-21 03:39:04.802936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:17.415 [2024-11-21 03:39:04.802946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:17.415 [2024-11-21 03:39:04.802956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.415 [2024-11-21 03:39:04.803488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.415 [2024-11-21 03:39:04.803530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:17.415 [2024-11-21 03:39:04.803541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:30:17.415 [2024-11-21 03:39:04.803561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.415 [2024-11-21 03:39:04.803681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.415 [2024-11-21 03:39:04.803692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:17.415 [2024-11-21 03:39:04.803701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:30:17.415 [2024-11-21 03:39:04.803711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.415 [2024-11-21 03:39:04.811954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.415 [2024-11-21 03:39:04.812004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:17.415 [2024-11-21 
03:39:04.812015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.222 ms 00:30:17.415 [2024-11-21 03:39:04.812027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.415 [2024-11-21 03:39:04.821998] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:17.415 [2024-11-21 03:39:04.825768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.415 [2024-11-21 03:39:04.825810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:17.415 [2024-11-21 03:39:04.825824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.660 ms 00:30:17.415 [2024-11-21 03:39:04.825832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.415 [2024-11-21 03:39:04.923609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.415 [2024-11-21 03:39:04.923675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:17.415 [2024-11-21 03:39:04.923699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.738 ms 00:30:17.415 [2024-11-21 03:39:04.923709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.415 [2024-11-21 03:39:04.923936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.415 [2024-11-21 03:39:04.923949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:17.415 [2024-11-21 03:39:04.923961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:30:17.415 [2024-11-21 03:39:04.923970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.415 [2024-11-21 03:39:04.929770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.415 [2024-11-21 03:39:04.929823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:30:17.415 [2024-11-21 03:39:04.929840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.772 ms 00:30:17.415 [2024-11-21 03:39:04.929849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.415 [2024-11-21 03:39:04.934791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.416 [2024-11-21 03:39:04.934838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:17.416 [2024-11-21 03:39:04.934852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.890 ms 00:30:17.416 [2024-11-21 03:39:04.934860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.416 [2024-11-21 03:39:04.935232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.416 [2024-11-21 03:39:04.935400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:17.416 [2024-11-21 03:39:04.935416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:30:17.416 [2024-11-21 03:39:04.935428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.677 [2024-11-21 03:39:04.983383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.677 [2024-11-21 03:39:04.983437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:17.677 [2024-11-21 03:39:04.983455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.925 ms 00:30:17.677 [2024-11-21 03:39:04.983464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.677 [2024-11-21 03:39:04.990486] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.677 [2024-11-21 03:39:04.990537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:17.677 [2024-11-21 03:39:04.990551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.956 ms 00:30:17.677 [2024-11-21 03:39:04.990559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.677 [2024-11-21 03:39:04.996013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.677 [2024-11-21 03:39:04.996061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:17.677 [2024-11-21 03:39:04.996074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.400 ms 00:30:17.677 [2024-11-21 03:39:04.996082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.677 [2024-11-21 03:39:05.002080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.677 [2024-11-21 03:39:05.002126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:17.677 [2024-11-21 03:39:05.002143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.946 ms 00:30:17.677 [2024-11-21 03:39:05.002150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.677 [2024-11-21 03:39:05.002204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.677 [2024-11-21 03:39:05.002213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:17.677 [2024-11-21 03:39:05.002224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:17.677 [2024-11-21 03:39:05.002232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.677 [2024-11-21 03:39:05.002322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.677 [2024-11-21 03:39:05.002333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:17.677 [2024-11-21 03:39:05.002351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:17.677 [2024-11-21 03:39:05.002362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.677 [2024-11-21 03:39:05.003735] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4366.899 ms, result 0 00:30:17.677 { 00:30:17.677 "name": "ftl0", 00:30:17.677 "uuid": "9bb65078-7682-42ba-84f4-487ad2767c85" 00:30:17.677 } 00:30:17.677 03:39:05 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:30:17.677 03:39:05 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:17.939 03:39:05 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:30:17.939 03:39:05 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:17.939 [2024-11-21 03:39:05.445622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.939 [2024-11-21 03:39:05.445689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:17.939 [2024-11-21 03:39:05.445707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:17.939 [2024-11-21 03:39:05.445719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.939 [2024-11-21 03:39:05.445745] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
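
Between startup and unload, restore.sh@61-63 brackets the saved bdev subsystem config in a top-level "subsystems" array; this is presumably what produces the ftl.json handed to spdk_dd further down. A sketch of that pattern under that assumption (the redirection target is inferred from the --json path used later, not shown at this point in the log):

{
  echo '{"subsystems": ['
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
  echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json  # assumed target
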
00:30:17.939 [2024-11-21 03:39:05.446576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.939 [2024-11-21 03:39:05.446613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:17.939 [2024-11-21 03:39:05.446627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:30:17.939 [2024-11-21 03:39:05.446636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.939 [2024-11-21 03:39:05.446934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.939 [2024-11-21 03:39:05.446947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:17.939 [2024-11-21 03:39:05.446960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:30:17.939 [2024-11-21 03:39:05.446972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.939 [2024-11-21 03:39:05.450221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.939 [2024-11-21 03:39:05.450245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:17.939 [2024-11-21 03:39:05.450257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.227 ms 00:30:17.939 [2024-11-21 03:39:05.450266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.939 [2024-11-21 03:39:05.456688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.939 [2024-11-21 03:39:05.456727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:17.939 [2024-11-21 03:39:05.456741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.392 ms 00:30:17.939 [2024-11-21 03:39:05.456757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.939 [2024-11-21 03:39:05.459770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.939 [2024-11-21 03:39:05.459824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:17.939 [2024-11-21 03:39:05.459837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.898 ms 00:30:17.939 [2024-11-21 03:39:05.459844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.939 [2024-11-21 03:39:05.466193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.939 [2024-11-21 03:39:05.466245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:17.939 [2024-11-21 03:39:05.466258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.296 ms 00:30:17.939 [2024-11-21 03:39:05.466266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.939 [2024-11-21 03:39:05.466418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.939 [2024-11-21 03:39:05.466429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:17.939 [2024-11-21 03:39:05.466444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:30:17.939 [2024-11-21 03:39:05.466452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.940 [2024-11-21 03:39:05.469418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.940 [2024-11-21 03:39:05.469466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:17.940 [2024-11-21 03:39:05.469478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.939 ms 00:30:17.940 [2024-11-21 03:39:05.469485] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.940 [2024-11-21 03:39:05.472484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.940 [2024-11-21 03:39:05.472529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:17.940 [2024-11-21 03:39:05.472540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.945 ms 00:30:17.940 [2024-11-21 03:39:05.472547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.940 [2024-11-21 03:39:05.475000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.940 [2024-11-21 03:39:05.475046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:17.940 [2024-11-21 03:39:05.475058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.403 ms 00:30:17.940 [2024-11-21 03:39:05.475066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.940 [2024-11-21 03:39:05.477181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.940 [2024-11-21 03:39:05.477225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:17.940 [2024-11-21 03:39:05.477238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.037 ms 00:30:17.940 [2024-11-21 03:39:05.477244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.940 [2024-11-21 03:39:05.477305] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:17.940 [2024-11-21 03:39:05.477322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 
03:39:05.477453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:30:17.940 [2024-11-21 03:39:05.477674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.477991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.478001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.478009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:17.940 [2024-11-21 03:39:05.478019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:17.941 [2024-11-21 03:39:05.478257] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:17.941 [2024-11-21 03:39:05.478273] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9bb65078-7682-42ba-84f4-487ad2767c85 00:30:17.941 [2024-11-21 03:39:05.478281] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:17.941 [2024-11-21 03:39:05.478291] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:17.941 [2024-11-21 03:39:05.478299] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:17.941 [2024-11-21 03:39:05.478323] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:17.941 [2024-11-21 03:39:05.478331] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:17.941 [2024-11-21 03:39:05.478345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:17.941 [2024-11-21 03:39:05.478353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:17.941 [2024-11-21 03:39:05.478362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:17.941 [2024-11-21 03:39:05.478370] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:17.941 [2024-11-21 03:39:05.478380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.941 [2024-11-21 03:39:05.478388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:17.941 [2024-11-21 03:39:05.478400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:30:17.941 [2024-11-21 03:39:05.478407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.941 [2024-11-21 03:39:05.480709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.941 [2024-11-21 03:39:05.480747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:30:17.941 [2024-11-21 03:39:05.480759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.259 ms 00:30:17.941 [2024-11-21 03:39:05.480769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.941 [2024-11-21 03:39:05.480883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.941 [2024-11-21 03:39:05.480920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:17.941 [2024-11-21 03:39:05.480934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:30:17.941 [2024-11-21 03:39:05.480942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.941 [2024-11-21 03:39:05.488932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.941 [2024-11-21 03:39:05.488976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:17.941 [2024-11-21 03:39:05.488992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.941 [2024-11-21 03:39:05.489001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.941 [2024-11-21 03:39:05.489069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.941 [2024-11-21 03:39:05.489078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:17.941 [2024-11-21 03:39:05.489089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.941 [2024-11-21 03:39:05.489096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.941 [2024-11-21 03:39:05.489185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.941 [2024-11-21 03:39:05.489196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:17.941 [2024-11-21 03:39:05.489206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.941 [2024-11-21 03:39:05.489216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.941 [2024-11-21 03:39:05.489236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.941 [2024-11-21 03:39:05.489244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:17.941 [2024-11-21 03:39:05.489254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.941 [2024-11-21 03:39:05.489262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.203 [2024-11-21 03:39:05.503631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.203 [2024-11-21 03:39:05.503689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:18.203 [2024-11-21 03:39:05.503702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.203 [2024-11-21 03:39:05.503714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.203 [2024-11-21 03:39:05.515514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.203 [2024-11-21 03:39:05.515572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:18.203 [2024-11-21 03:39:05.515585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.203 [2024-11-21 03:39:05.515594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.203 [2024-11-21 03:39:05.515679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.203 [2024-11-21 03:39:05.515689] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:18.203 [2024-11-21 03:39:05.515700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.203 [2024-11-21 03:39:05.515709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.203 [2024-11-21 03:39:05.515765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.203 [2024-11-21 03:39:05.515775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:18.203 [2024-11-21 03:39:05.515786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.203 [2024-11-21 03:39:05.515793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.203 [2024-11-21 03:39:05.515869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.203 [2024-11-21 03:39:05.515879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:18.203 [2024-11-21 03:39:05.515889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.203 [2024-11-21 03:39:05.515915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.203 [2024-11-21 03:39:05.515952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.203 [2024-11-21 03:39:05.515964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:18.203 [2024-11-21 03:39:05.515975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.203 [2024-11-21 03:39:05.515982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.203 [2024-11-21 03:39:05.516029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.203 [2024-11-21 03:39:05.516039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:18.203 [2024-11-21 03:39:05.516049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.203 [2024-11-21 03:39:05.516057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.203 [2024-11-21 03:39:05.516112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.203 [2024-11-21 03:39:05.516121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:18.203 [2024-11-21 03:39:05.516132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.203 [2024-11-21 03:39:05.516140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.203 [2024-11-21 03:39:05.516295] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.620 ms, result 0 00:30:18.203 true 00:30:18.203 03:39:05 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 96156 00:30:18.203 03:39:05 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 96156 ']' 00:30:18.203 03:39:05 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 96156 00:30:18.203 03:39:05 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:30:18.203 03:39:05 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:18.203 03:39:05 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96156 00:30:18.203 killing process with pid 96156 00:30:18.203 03:39:05 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:18.203 03:39:05 ftl.ftl_restore_fast -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:18.203 03:39:05 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96156' 00:30:18.203 03:39:05 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 96156 00:30:18.203 03:39:05 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 96156 00:30:23.491 03:39:09 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:26.040 262144+0 records in 00:30:26.040 262144+0 records out 00:30:26.040 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.62692 s, 296 MB/s 00:30:26.301 03:39:13 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:28.852 03:39:15 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:28.853 [2024-11-21 03:39:15.875540] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:30:28.853 [2024-11-21 03:39:15.875635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96367 ] 00:30:28.853 [2024-11-21 03:39:16.000779] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:28.853 [2024-11-21 03:39:16.031398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.853 [2024-11-21 03:39:16.059539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.853 [2024-11-21 03:39:16.174442] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:28.853 [2024-11-21 03:39:16.174526] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:28.853 [2024-11-21 03:39:16.335929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.853 [2024-11-21 03:39:16.335993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:28.853 [2024-11-21 03:39:16.336008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:28.853 [2024-11-21 03:39:16.336017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.853 [2024-11-21 03:39:16.336074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.853 [2024-11-21 03:39:16.336085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:28.853 [2024-11-21 03:39:16.336094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:28.853 [2024-11-21 03:39:16.336102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.853 [2024-11-21 03:39:16.336129] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:28.853 [2024-11-21 03:39:16.336518] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:28.853 [2024-11-21 03:39:16.336561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.853 [2024-11-21 03:39:16.336569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:28.853 [2024-11-21 
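
The dd figures above are self-consistent: 256K records of 4 KiB each is 2^18 * 2^12 = 1073741824 bytes (1 GiB), produced here at roughly 296 MB/s. The three commands as logged form the restore-test pattern, with the md5sum output presumably kept for a post-restore comparison:

dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K  # 1 GiB of random data
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile                                  # checksum to verify after restore
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
  --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json               # write it into the FTL bdev
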
03:39:16.336582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:30:28.853 [2024-11-21 03:39:16.336590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.853 [2024-11-21 03:39:16.338404] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:28.853 [2024-11-21 03:39:16.342170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.853 [2024-11-21 03:39:16.342225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:28.853 [2024-11-21 03:39:16.342240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.768 ms 00:30:28.853 [2024-11-21 03:39:16.342254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.853 [2024-11-21 03:39:16.342342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.853 [2024-11-21 03:39:16.342353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:28.853 [2024-11-21 03:39:16.342363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:30:28.853 [2024-11-21 03:39:16.342371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.853 [2024-11-21 03:39:16.350241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.853 [2024-11-21 03:39:16.350281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:28.853 [2024-11-21 03:39:16.350294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.820 ms 00:30:28.853 [2024-11-21 03:39:16.350302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.853 [2024-11-21 03:39:16.350419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.853 [2024-11-21 03:39:16.350429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:28.853 [2024-11-21 03:39:16.350441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:30:28.853 [2024-11-21 03:39:16.350449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.853 [2024-11-21 03:39:16.350508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.853 [2024-11-21 03:39:16.350518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:28.853 [2024-11-21 03:39:16.350526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:28.853 [2024-11-21 03:39:16.350537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.853 [2024-11-21 03:39:16.350562] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:28.853 [2024-11-21 03:39:16.352575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.853 [2024-11-21 03:39:16.352613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:28.853 [2024-11-21 03:39:16.352623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.020 ms 00:30:28.853 [2024-11-21 03:39:16.352630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.853 [2024-11-21 03:39:16.352671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.853 [2024-11-21 03:39:16.352680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:28.853 [2024-11-21 03:39:16.352689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:30:28.853 [2024-11-21 
03:39:16.352703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.853 [2024-11-21 03:39:16.352725] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:28.853 [2024-11-21 03:39:16.352745] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:28.853 [2024-11-21 03:39:16.352790] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:28.853 [2024-11-21 03:39:16.352806] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:28.853 [2024-11-21 03:39:16.352958] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:28.853 [2024-11-21 03:39:16.352971] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:28.853 [2024-11-21 03:39:16.352986] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:28.853 [2024-11-21 03:39:16.352997] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:28.853 [2024-11-21 03:39:16.353006] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:28.853 [2024-11-21 03:39:16.353017] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:28.853 [2024-11-21 03:39:16.353025] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:28.853 [2024-11-21 03:39:16.353033] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:28.853 [2024-11-21 03:39:16.353040] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:28.853 [2024-11-21 03:39:16.353048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.853 [2024-11-21 03:39:16.353057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:28.853 [2024-11-21 03:39:16.353067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:30:28.853 [2024-11-21 03:39:16.353074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.853 [2024-11-21 03:39:16.353162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.853 [2024-11-21 03:39:16.353170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:28.853 [2024-11-21 03:39:16.353178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:28.853 [2024-11-21 03:39:16.353186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.853 [2024-11-21 03:39:16.353285] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:28.853 [2024-11-21 03:39:16.353304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:28.853 [2024-11-21 03:39:16.353313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:28.853 [2024-11-21 03:39:16.353321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.853 [2024-11-21 03:39:16.353328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:28.853 [2024-11-21 03:39:16.353335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:28.853 [2024-11-21 03:39:16.353342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 80.00 MiB 00:30:28.853 [2024-11-21 03:39:16.353351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:28.853 [2024-11-21 03:39:16.353365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:28.853 [2024-11-21 03:39:16.353375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:28.853 [2024-11-21 03:39:16.353383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:28.853 [2024-11-21 03:39:16.353390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:28.853 [2024-11-21 03:39:16.353397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:28.853 [2024-11-21 03:39:16.353404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:28.853 [2024-11-21 03:39:16.353411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:28.853 [2024-11-21 03:39:16.353418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.853 [2024-11-21 03:39:16.353426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:28.853 [2024-11-21 03:39:16.353433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:28.853 [2024-11-21 03:39:16.353439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.853 [2024-11-21 03:39:16.353445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:28.853 [2024-11-21 03:39:16.353452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:28.853 [2024-11-21 03:39:16.353459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.853 [2024-11-21 03:39:16.353465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:28.853 [2024-11-21 03:39:16.353472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:28.853 [2024-11-21 03:39:16.353479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.853 [2024-11-21 03:39:16.353490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:28.853 [2024-11-21 03:39:16.353497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:28.853 [2024-11-21 03:39:16.353503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.853 [2024-11-21 03:39:16.353510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:28.853 [2024-11-21 03:39:16.353516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:28.854 [2024-11-21 03:39:16.353523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.854 [2024-11-21 03:39:16.353530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:28.854 [2024-11-21 03:39:16.353536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:28.854 [2024-11-21 03:39:16.353543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:28.854 [2024-11-21 03:39:16.353550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:28.854 [2024-11-21 03:39:16.353556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:28.854 [2024-11-21 03:39:16.353563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:28.854 [2024-11-21 03:39:16.353569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:28.854 [2024-11-21 03:39:16.353575] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:28.854 [2024-11-21 03:39:16.353587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.854 [2024-11-21 03:39:16.353593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:28.854 [2024-11-21 03:39:16.353604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:28.854 [2024-11-21 03:39:16.353611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.854 [2024-11-21 03:39:16.353617] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:28.854 [2024-11-21 03:39:16.353628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:28.854 [2024-11-21 03:39:16.353635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:28.854 [2024-11-21 03:39:16.353642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.854 [2024-11-21 03:39:16.353650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:28.854 [2024-11-21 03:39:16.353657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:28.854 [2024-11-21 03:39:16.353663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:28.854 [2024-11-21 03:39:16.353670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:28.854 [2024-11-21 03:39:16.353676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:28.854 [2024-11-21 03:39:16.353683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:28.854 [2024-11-21 03:39:16.353692] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:28.854 [2024-11-21 03:39:16.353701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:28.854 [2024-11-21 03:39:16.353710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:28.854 [2024-11-21 03:39:16.353717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:28.854 [2024-11-21 03:39:16.353727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:28.854 [2024-11-21 03:39:16.353734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:28.854 [2024-11-21 03:39:16.353742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:28.854 [2024-11-21 03:39:16.353749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:28.854 [2024-11-21 03:39:16.353756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:28.854 [2024-11-21 03:39:16.353763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:28.854 [2024-11-21 03:39:16.353771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:28.854 [2024-11-21 
03:39:16.353778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:28.854 [2024-11-21 03:39:16.353785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:28.854 [2024-11-21 03:39:16.353792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:28.854 [2024-11-21 03:39:16.353799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:28.854 [2024-11-21 03:39:16.353806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:28.854 [2024-11-21 03:39:16.353812] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:28.854 [2024-11-21 03:39:16.353820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:28.854 [2024-11-21 03:39:16.353831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:28.854 [2024-11-21 03:39:16.353839] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:28.854 [2024-11-21 03:39:16.353848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:28.854 [2024-11-21 03:39:16.353856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:28.854 [2024-11-21 03:39:16.353863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.854 [2024-11-21 03:39:16.353871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:28.854 [2024-11-21 03:39:16.353878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:30:28.854 [2024-11-21 03:39:16.353885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.854 [2024-11-21 03:39:16.367711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.854 [2024-11-21 03:39:16.367760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:28.854 [2024-11-21 03:39:16.367772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.756 ms 00:30:28.854 [2024-11-21 03:39:16.367781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.854 [2024-11-21 03:39:16.367864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.854 [2024-11-21 03:39:16.367874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:28.854 [2024-11-21 03:39:16.367893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:28.854 [2024-11-21 03:39:16.367916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.854 [2024-11-21 03:39:16.387489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.854 [2024-11-21 03:39:16.387555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:28.854 [2024-11-21 03:39:16.387572] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 19.512 ms 00:30:28.854 [2024-11-21 03:39:16.387584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.854 [2024-11-21 03:39:16.387643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.854 [2024-11-21 03:39:16.387657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:28.854 [2024-11-21 03:39:16.387677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:28.854 [2024-11-21 03:39:16.387690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.854 [2024-11-21 03:39:16.388336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.854 [2024-11-21 03:39:16.388377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:28.854 [2024-11-21 03:39:16.388402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:30:28.854 [2024-11-21 03:39:16.388415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.854 [2024-11-21 03:39:16.388610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.854 [2024-11-21 03:39:16.388623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:28.854 [2024-11-21 03:39:16.388635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:30:28.854 [2024-11-21 03:39:16.388646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.854 [2024-11-21 03:39:16.397234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.854 [2024-11-21 03:39:16.397287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:28.854 [2024-11-21 03:39:16.397301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.561 ms 00:30:28.854 [2024-11-21 03:39:16.397311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.854 [2024-11-21 03:39:16.401217] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:28.854 [2024-11-21 03:39:16.401267] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:28.854 [2024-11-21 03:39:16.401279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.854 [2024-11-21 03:39:16.401287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:28.854 [2024-11-21 03:39:16.401297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.860 ms 00:30:28.854 [2024-11-21 03:39:16.401304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.117 [2024-11-21 03:39:16.416713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.117 [2024-11-21 03:39:16.416770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:29.117 [2024-11-21 03:39:16.416782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.357 ms 00:30:29.117 [2024-11-21 03:39:16.416791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.117 [2024-11-21 03:39:16.419803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.117 [2024-11-21 03:39:16.419853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:29.117 [2024-11-21 03:39:16.419864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.960 ms 00:30:29.117 [2024-11-21 
03:39:16.419871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.117 [2024-11-21 03:39:16.422423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.117 [2024-11-21 03:39:16.422468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:29.117 [2024-11-21 03:39:16.422478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.491 ms 00:30:29.117 [2024-11-21 03:39:16.422493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.117 [2024-11-21 03:39:16.422837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.117 [2024-11-21 03:39:16.422869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:29.117 [2024-11-21 03:39:16.422879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:30:29.117 [2024-11-21 03:39:16.422887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.117 [2024-11-21 03:39:16.447304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.117 [2024-11-21 03:39:16.447375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:29.117 [2024-11-21 03:39:16.447389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.381 ms 00:30:29.117 [2024-11-21 03:39:16.447398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.117 [2024-11-21 03:39:16.455654] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:29.117 [2024-11-21 03:39:16.458742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.117 [2024-11-21 03:39:16.458783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:29.117 [2024-11-21 03:39:16.458805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.285 ms 00:30:29.117 [2024-11-21 03:39:16.458813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.117 [2024-11-21 03:39:16.458894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.117 [2024-11-21 03:39:16.458927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:29.117 [2024-11-21 03:39:16.458937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:29.117 [2024-11-21 03:39:16.458945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.117 [2024-11-21 03:39:16.459019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.117 [2024-11-21 03:39:16.459031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:29.117 [2024-11-21 03:39:16.459040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:29.117 [2024-11-21 03:39:16.459052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.117 [2024-11-21 03:39:16.459076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.117 [2024-11-21 03:39:16.459085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:29.117 [2024-11-21 03:39:16.459099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:29.117 [2024-11-21 03:39:16.459110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.117 [2024-11-21 03:39:16.459143] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:29.117 [2024-11-21 03:39:16.459153] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.117 [2024-11-21 03:39:16.459161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:29.117 [2024-11-21 03:39:16.459169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:29.117 [2024-11-21 03:39:16.459177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.117 [2024-11-21 03:39:16.464828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.117 [2024-11-21 03:39:16.464881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:29.117 [2024-11-21 03:39:16.464892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.627 ms 00:30:29.117 [2024-11-21 03:39:16.464927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.117 [2024-11-21 03:39:16.465012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.117 [2024-11-21 03:39:16.465026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:29.117 [2024-11-21 03:39:16.465035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:30:29.117 [2024-11-21 03:39:16.465043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.117 [2024-11-21 03:39:16.466183] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 129.799 ms, result 0 00:30:30.063  [2024-11-21T03:39:18.573Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-21T03:39:19.516Z] Copying: 29/1024 [MB] (17 MBps) [2024-11-21T03:39:20.904Z] Copying: 51/1024 [MB] (21 MBps) [2024-11-21T03:39:21.848Z] Copying: 65/1024 [MB] (13 MBps) [2024-11-21T03:39:22.792Z] Copying: 84/1024 [MB] (19 MBps) [2024-11-21T03:39:23.736Z] Copying: 102/1024 [MB] (18 MBps) [2024-11-21T03:39:24.681Z] Copying: 139/1024 [MB] (36 MBps) [2024-11-21T03:39:25.626Z] Copying: 165/1024 [MB] (26 MBps) [2024-11-21T03:39:26.569Z] Copying: 176/1024 [MB] (11 MBps) [2024-11-21T03:39:27.511Z] Copying: 195/1024 [MB] (18 MBps) [2024-11-21T03:39:28.989Z] Copying: 210/1024 [MB] (14 MBps) [2024-11-21T03:39:29.594Z] Copying: 242/1024 [MB] (32 MBps) [2024-11-21T03:39:30.537Z] Copying: 282/1024 [MB] (40 MBps) [2024-11-21T03:39:31.480Z] Copying: 321/1024 [MB] (38 MBps) [2024-11-21T03:39:32.869Z] Copying: 336/1024 [MB] (14 MBps) [2024-11-21T03:39:33.814Z] Copying: 349/1024 [MB] (12 MBps) [2024-11-21T03:39:34.759Z] Copying: 367/1024 [MB] (17 MBps) [2024-11-21T03:39:35.702Z] Copying: 384/1024 [MB] (17 MBps) [2024-11-21T03:39:36.644Z] Copying: 401/1024 [MB] (17 MBps) [2024-11-21T03:39:37.588Z] Copying: 428/1024 [MB] (26 MBps) [2024-11-21T03:39:38.531Z] Copying: 448/1024 [MB] (19 MBps) [2024-11-21T03:39:39.912Z] Copying: 465/1024 [MB] (17 MBps) [2024-11-21T03:39:40.482Z] Copying: 498/1024 [MB] (33 MBps) [2024-11-21T03:39:41.865Z] Copying: 532/1024 [MB] (33 MBps) [2024-11-21T03:39:42.806Z] Copying: 550/1024 [MB] (17 MBps) [2024-11-21T03:39:43.747Z] Copying: 568/1024 [MB] (18 MBps) [2024-11-21T03:39:44.691Z] Copying: 581/1024 [MB] (12 MBps) [2024-11-21T03:39:45.633Z] Copying: 610/1024 [MB] (28 MBps) [2024-11-21T03:39:46.576Z] Copying: 649/1024 [MB] (39 MBps) [2024-11-21T03:39:47.520Z] Copying: 686/1024 [MB] (36 MBps) [2024-11-21T03:39:48.909Z] Copying: 704/1024 [MB] (17 MBps) [2024-11-21T03:39:49.482Z] Copying: 720/1024 [MB] (15 MBps) [2024-11-21T03:39:50.868Z] Copying: 735/1024 [MB] (15 MBps) [2024-11-21T03:39:51.812Z] Copying: 753/1024 [MB] (17 MBps) 
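For orientation: everything from the killprocess trace through the 'FTL startup' management steps and the Copying progress around this point is produced by a handful of shell commands in restore.sh. A minimal sketch of that write pass, assuming the paths visible in this run (the real script captures the md5 sum in a shell variable; the sidecar file below is only for illustration):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  TESTFILE=$SPDK_DIR/test/ftl/testfile
  FTL_JSON=$SPDK_DIR/test/ftl/config/ftl.json

  # restore.sh@69: 256K records x 4 KiB = 1 GiB of random test data
  dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K

  # restore.sh@70: record the checksum for verification after the restore
  # (written to a sidecar file here; the test keeps it in a variable)
  md5sum "$TESTFILE" > "$TESTFILE.md5"

  # restore.sh@73: stream the file into the ftl0 bdev. spdk_dd boots its
  # own SPDK application, which is what emitted the DPDK/EAL banner and
  # the 'FTL startup' management trace above.
  "$SPDK_DIR/build/bin/spdk_dd" --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON"

The interleaved Copying lines are spdk_dd's own progress output as the 1 GiB file moves through the FTL write path; throughput varies between roughly 11 and 40 MBps in this run.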
[2024-11-21T03:39:52.757Z] Copying: 772/1024 [MB] (18 MBps) [2024-11-21T03:39:53.700Z] Copying: 787/1024 [MB] (15 MBps) [2024-11-21T03:39:54.644Z] Copying: 804/1024 [MB] (16 MBps) [2024-11-21T03:39:55.588Z] Copying: 825/1024 [MB] (20 MBps) [2024-11-21T03:39:56.533Z] Copying: 842/1024 [MB] (16 MBps) [2024-11-21T03:39:57.919Z] Copying: 858/1024 [MB] (16 MBps) [2024-11-21T03:39:58.492Z] Copying: 879/1024 [MB] (20 MBps) [2024-11-21T03:39:59.878Z] Copying: 892/1024 [MB] (13 MBps) [2024-11-21T03:40:00.825Z] Copying: 910/1024 [MB] (18 MBps) [2024-11-21T03:40:01.498Z] Copying: 938/1024 [MB] (27 MBps) [2024-11-21T03:40:02.882Z] Copying: 956/1024 [MB] (17 MBps) [2024-11-21T03:40:03.826Z] Copying: 972/1024 [MB] (16 MBps) [2024-11-21T03:40:04.771Z] Copying: 991/1024 [MB] (19 MBps) [2024-11-21T03:40:05.345Z] Copying: 1011/1024 [MB] (20 MBps) [2024-11-21T03:40:05.345Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-21 03:40:05.203888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.780 [2024-11-21 03:40:05.204025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:17.780 [2024-11-21 03:40:05.204045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:17.780 [2024-11-21 03:40:05.204055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.780 [2024-11-21 03:40:05.204087] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:17.780 [2024-11-21 03:40:05.205098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.780 [2024-11-21 03:40:05.205132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:17.780 [2024-11-21 03:40:05.205145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:31:17.780 [2024-11-21 03:40:05.205155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.780 [2024-11-21 03:40:05.208083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.780 [2024-11-21 03:40:05.208138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:17.780 [2024-11-21 03:40:05.208151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.900 ms 00:31:17.780 [2024-11-21 03:40:05.208160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.780 [2024-11-21 03:40:05.208199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.780 [2024-11-21 03:40:05.208216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:17.780 [2024-11-21 03:40:05.208225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:17.780 [2024-11-21 03:40:05.208234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.780 [2024-11-21 03:40:05.208299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.780 [2024-11-21 03:40:05.208310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:17.780 [2024-11-21 03:40:05.208320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:31:17.780 [2024-11-21 03:40:05.208328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.780 [2024-11-21 03:40:05.208343] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:17.780 [2024-11-21 03:40:05.208358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 
wr_cnt: 0 state: free 00:31:17.780 [2024-11-21 03:40:05.208372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:17.780 [2024-11-21 03:40:05.208380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:17.780 [2024-11-21 03:40:05.208387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:17.780 [2024-11-21 03:40:05.208395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:17.780 [2024-11-21 03:40:05.208402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:17.780 [2024-11-21 03:40:05.208410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
26: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208762] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208983] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.208991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:17.781 [2024-11-21 03:40:05.209151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:17.782 [2024-11-21 03:40:05.209158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:17.782 [2024-11-21 03:40:05.209166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:17.782 [2024-11-21 03:40:05.209175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:17.782 [2024-11-21 03:40:05.209183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:17.782 [2024-11-21 
03:40:05.209200] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:17.782 [2024-11-21 03:40:05.209209] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9bb65078-7682-42ba-84f4-487ad2767c85 00:31:17.782 [2024-11-21 03:40:05.209218] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:17.782 [2024-11-21 03:40:05.209230] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:17.782 [2024-11-21 03:40:05.209238] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:17.782 [2024-11-21 03:40:05.209247] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:17.782 [2024-11-21 03:40:05.209256] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:17.782 [2024-11-21 03:40:05.209265] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:17.782 [2024-11-21 03:40:05.209274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:17.782 [2024-11-21 03:40:05.209280] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:17.782 [2024-11-21 03:40:05.209287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:17.782 [2024-11-21 03:40:05.209295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.782 [2024-11-21 03:40:05.209303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:17.782 [2024-11-21 03:40:05.209312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:31:17.782 [2024-11-21 03:40:05.209324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.782 [2024-11-21 03:40:05.212555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.782 [2024-11-21 03:40:05.212599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:17.782 [2024-11-21 03:40:05.212611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.213 ms 00:31:17.782 [2024-11-21 03:40:05.212629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.782 [2024-11-21 03:40:05.212796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.782 [2024-11-21 03:40:05.212807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:17.782 [2024-11-21 03:40:05.212824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:31:17.782 [2024-11-21 03:40:05.212831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.782 [2024-11-21 03:40:05.223252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.782 [2024-11-21 03:40:05.223302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:17.782 [2024-11-21 03:40:05.223313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.782 [2024-11-21 03:40:05.223322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.782 [2024-11-21 03:40:05.223396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.782 [2024-11-21 03:40:05.223405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:17.782 [2024-11-21 03:40:05.223427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.782 [2024-11-21 03:40:05.223437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.782 [2024-11-21 03:40:05.223495] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.782 [2024-11-21 03:40:05.223507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:17.782 [2024-11-21 03:40:05.223516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.782 [2024-11-21 03:40:05.223524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.782 [2024-11-21 03:40:05.223540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.782 [2024-11-21 03:40:05.223551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:17.782 [2024-11-21 03:40:05.223559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.782 [2024-11-21 03:40:05.223570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.782 [2024-11-21 03:40:05.242973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.782 [2024-11-21 03:40:05.243023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:17.782 [2024-11-21 03:40:05.243035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.782 [2024-11-21 03:40:05.243044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.782 [2024-11-21 03:40:05.257480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.782 [2024-11-21 03:40:05.257536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:17.782 [2024-11-21 03:40:05.257559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.782 [2024-11-21 03:40:05.257569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.782 [2024-11-21 03:40:05.257662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.782 [2024-11-21 03:40:05.257682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:17.782 [2024-11-21 03:40:05.257692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.782 [2024-11-21 03:40:05.257700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.782 [2024-11-21 03:40:05.257742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.782 [2024-11-21 03:40:05.257754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:17.782 [2024-11-21 03:40:05.257767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.782 [2024-11-21 03:40:05.257776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.782 [2024-11-21 03:40:05.257843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.782 [2024-11-21 03:40:05.257855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:17.782 [2024-11-21 03:40:05.257864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.782 [2024-11-21 03:40:05.257873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.782 [2024-11-21 03:40:05.257943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.782 [2024-11-21 03:40:05.257955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:17.782 [2024-11-21 03:40:05.257969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.782 [2024-11-21 03:40:05.257978] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:31:17.782 [2024-11-21 03:40:05.258032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.782 [2024-11-21 03:40:05.258042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:17.782 [2024-11-21 03:40:05.258051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.782 [2024-11-21 03:40:05.258059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.782 [2024-11-21 03:40:05.258113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.782 [2024-11-21 03:40:05.258124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:17.782 [2024-11-21 03:40:05.258132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.782 [2024-11-21 03:40:05.258140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.782 [2024-11-21 03:40:05.258300] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 54.375 ms, result 0 00:31:18.355 00:31:18.355 00:31:18.355 03:40:05 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:31:18.355 [2024-11-21 03:40:05.874308] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:31:18.355 [2024-11-21 03:40:05.874463] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96870 ] 00:31:18.616 [2024-11-21 03:40:06.011761] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
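The fast shutdown above finished in 54.375 ms after persisting the NV cache metadata and marking the shared-memory state clean, and restore.sh@74 then starts a second spdk_dd to read the data back out of ftl0. In outline (the --count mirrors the 262144 records written in the first pass; the checksum comparison is assumed, against the sum recorded before the write):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  TESTFILE=$SPDK_DIR/test/ftl/testfile
  FTL_JSON=$SPDK_DIR/test/ftl/config/ftl.json

  # restore.sh@74: read the device contents back into the test file; this
  # spdk_dd instance restores the FTL device from the state the fast
  # shutdown left behind (its startup trace follows below).
  "$SPDK_DIR/build/bin/spdk_dd" --ib=ftl0 --of="$TESTFILE" \
      --json="$FTL_JSON" --count=262144

  # If the fast restore path preserved the data, the checksum recorded
  # before the write must still match (sidecar-file variant of the check).
  md5sum -c "$TESTFILE.md5"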
00:31:18.616 [2024-11-21 03:40:06.042372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.616 [2024-11-21 03:40:06.083095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.878 [2024-11-21 03:40:06.235293] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:18.878 [2024-11-21 03:40:06.235386] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:18.878 [2024-11-21 03:40:06.400428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.878 [2024-11-21 03:40:06.400497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:18.878 [2024-11-21 03:40:06.400515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:18.878 [2024-11-21 03:40:06.400524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.878 [2024-11-21 03:40:06.400586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.878 [2024-11-21 03:40:06.400598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:18.878 [2024-11-21 03:40:06.400607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:31:18.878 [2024-11-21 03:40:06.400616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.878 [2024-11-21 03:40:06.400642] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:18.878 [2024-11-21 03:40:06.401310] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:18.878 [2024-11-21 03:40:06.401365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.878 [2024-11-21 03:40:06.401383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:18.878 [2024-11-21 03:40:06.401398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:31:18.878 [2024-11-21 03:40:06.401410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.878 [2024-11-21 03:40:06.401883] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:18.878 [2024-11-21 03:40:06.401953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.878 [2024-11-21 03:40:06.401964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:18.878 [2024-11-21 03:40:06.401975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:31:18.878 [2024-11-21 03:40:06.401988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.878 [2024-11-21 03:40:06.402116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.878 [2024-11-21 03:40:06.402128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:18.878 [2024-11-21 03:40:06.402145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:31:18.878 [2024-11-21 03:40:06.402153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.878 [2024-11-21 03:40:06.402435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.878 [2024-11-21 03:40:06.402458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:18.878 [2024-11-21 03:40:06.402467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:31:18.878 [2024-11-21 03:40:06.402475] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.878 [2024-11-21 03:40:06.402573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.878 [2024-11-21 03:40:06.402584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:18.878 [2024-11-21 03:40:06.402593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:31:18.878 [2024-11-21 03:40:06.402618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.878 [2024-11-21 03:40:06.402643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.878 [2024-11-21 03:40:06.402652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:18.878 [2024-11-21 03:40:06.402666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:18.878 [2024-11-21 03:40:06.402674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.878 [2024-11-21 03:40:06.402702] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:18.878 [2024-11-21 03:40:06.405563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.878 [2024-11-21 03:40:06.405599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:18.878 [2024-11-21 03:40:06.405611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.866 ms 00:31:18.878 [2024-11-21 03:40:06.405620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.878 [2024-11-21 03:40:06.405661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.878 [2024-11-21 03:40:06.405671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:18.878 [2024-11-21 03:40:06.405681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:31:18.878 [2024-11-21 03:40:06.405690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.878 [2024-11-21 03:40:06.405762] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:18.878 [2024-11-21 03:40:06.405793] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:18.878 [2024-11-21 03:40:06.405836] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:18.878 [2024-11-21 03:40:06.405860] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:18.878 [2024-11-21 03:40:06.405990] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:18.878 [2024-11-21 03:40:06.406005] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:18.878 [2024-11-21 03:40:06.406020] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:18.878 [2024-11-21 03:40:06.406033] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:18.878 [2024-11-21 03:40:06.406050] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:18.878 [2024-11-21 03:40:06.406060] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:18.878 [2024-11-21 03:40:06.406068] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:31:18.878 [2024-11-21 03:40:06.406078] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:18.878 [2024-11-21 03:40:06.406086] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:18.878 [2024-11-21 03:40:06.406094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.879 [2024-11-21 03:40:06.406102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:18.879 [2024-11-21 03:40:06.406110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:31:18.879 [2024-11-21 03:40:06.406118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.879 [2024-11-21 03:40:06.406206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.879 [2024-11-21 03:40:06.406216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:18.879 [2024-11-21 03:40:06.406229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:18.879 [2024-11-21 03:40:06.406238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.879 [2024-11-21 03:40:06.406346] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:18.879 [2024-11-21 03:40:06.406361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:18.879 [2024-11-21 03:40:06.406372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:18.879 [2024-11-21 03:40:06.406380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.879 [2024-11-21 03:40:06.406390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:18.879 [2024-11-21 03:40:06.406399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:18.879 [2024-11-21 03:40:06.406406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:18.879 [2024-11-21 03:40:06.406414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:18.879 [2024-11-21 03:40:06.406422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:18.879 [2024-11-21 03:40:06.406430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:18.879 [2024-11-21 03:40:06.406444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:18.879 [2024-11-21 03:40:06.406453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:18.879 [2024-11-21 03:40:06.406462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:18.879 [2024-11-21 03:40:06.406470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:18.879 [2024-11-21 03:40:06.406479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:18.879 [2024-11-21 03:40:06.406486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.879 [2024-11-21 03:40:06.406495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:18.879 [2024-11-21 03:40:06.406509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:18.879 [2024-11-21 03:40:06.406518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.879 [2024-11-21 03:40:06.406526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:18.879 [2024-11-21 03:40:06.406535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:18.879 [2024-11-21 03:40:06.406544] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.879 [2024-11-21 03:40:06.406552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:18.879 [2024-11-21 03:40:06.406560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:18.879 [2024-11-21 03:40:06.406568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.879 [2024-11-21 03:40:06.406576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:18.879 [2024-11-21 03:40:06.406584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:18.879 [2024-11-21 03:40:06.406592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.879 [2024-11-21 03:40:06.406600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:18.879 [2024-11-21 03:40:06.406639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:18.879 [2024-11-21 03:40:06.406647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.879 [2024-11-21 03:40:06.406654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:18.879 [2024-11-21 03:40:06.406661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:18.879 [2024-11-21 03:40:06.406673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:18.879 [2024-11-21 03:40:06.406681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:18.879 [2024-11-21 03:40:06.406688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:18.879 [2024-11-21 03:40:06.406695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:18.879 [2024-11-21 03:40:06.406702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:18.879 [2024-11-21 03:40:06.406710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:18.879 [2024-11-21 03:40:06.406717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.879 [2024-11-21 03:40:06.406724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:18.879 [2024-11-21 03:40:06.406731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:18.879 [2024-11-21 03:40:06.406737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.879 [2024-11-21 03:40:06.406745] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:18.879 [2024-11-21 03:40:06.406754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:18.879 [2024-11-21 03:40:06.406763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:18.879 [2024-11-21 03:40:06.406775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.879 [2024-11-21 03:40:06.406783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:18.879 [2024-11-21 03:40:06.406790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:18.879 [2024-11-21 03:40:06.406798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:18.879 [2024-11-21 03:40:06.406806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:18.879 [2024-11-21 03:40:06.406814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:18.879 [2024-11-21 03:40:06.406821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:31:18.879 [2024-11-21 03:40:06.406830] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:18.879 [2024-11-21 03:40:06.406842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:18.879 [2024-11-21 03:40:06.406850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:18.879 [2024-11-21 03:40:06.406859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:18.879 [2024-11-21 03:40:06.406867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:18.879 [2024-11-21 03:40:06.406874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:18.879 [2024-11-21 03:40:06.406882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:18.879 [2024-11-21 03:40:06.406889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:18.879 [2024-11-21 03:40:06.406916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:18.879 [2024-11-21 03:40:06.406924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:18.879 [2024-11-21 03:40:06.406931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:18.879 [2024-11-21 03:40:06.406938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:18.879 [2024-11-21 03:40:06.406948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:18.879 [2024-11-21 03:40:06.406957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:18.879 [2024-11-21 03:40:06.406965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:18.879 [2024-11-21 03:40:06.406973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:18.879 [2024-11-21 03:40:06.406981] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:18.879 [2024-11-21 03:40:06.406989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:18.879 [2024-11-21 03:40:06.406997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:18.879 [2024-11-21 03:40:06.407006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:18.879 [2024-11-21 03:40:06.407013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:18.879 [2024-11-21 03:40:06.407021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:18.879 [2024-11-21 03:40:06.407030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.879 [2024-11-21 03:40:06.407038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:18.879 [2024-11-21 03:40:06.407049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.753 ms 00:31:18.879 [2024-11-21 03:40:06.407057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.879 [2024-11-21 03:40:06.421472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.879 [2024-11-21 03:40:06.421531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:18.879 [2024-11-21 03:40:06.421544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.364 ms 00:31:18.879 [2024-11-21 03:40:06.421554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.879 [2024-11-21 03:40:06.421642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.879 [2024-11-21 03:40:06.421658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:18.879 [2024-11-21 03:40:06.421670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:31:18.879 [2024-11-21 03:40:06.421684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.141 [2024-11-21 03:40:06.445655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.141 [2024-11-21 03:40:06.445724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:19.141 [2024-11-21 03:40:06.445743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.907 ms 00:31:19.141 [2024-11-21 03:40:06.445755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.141 [2024-11-21 03:40:06.445815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.141 [2024-11-21 03:40:06.445830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:19.141 [2024-11-21 03:40:06.445844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:19.141 [2024-11-21 03:40:06.445855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.141 [2024-11-21 03:40:06.446032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.141 [2024-11-21 03:40:06.446054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:19.141 [2024-11-21 03:40:06.446065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:31:19.141 [2024-11-21 03:40:06.446076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.141 [2024-11-21 03:40:06.446254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.141 [2024-11-21 03:40:06.446277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:19.141 [2024-11-21 03:40:06.446289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:31:19.141 [2024-11-21 03:40:06.446298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.141 [2024-11-21 03:40:06.458052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.141 [2024-11-21 
03:40:06.458100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:19.141 [2024-11-21 03:40:06.458121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.724 ms 00:31:19.141 [2024-11-21 03:40:06.458130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.141 [2024-11-21 03:40:06.458275] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:19.141 [2024-11-21 03:40:06.458292] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:19.141 [2024-11-21 03:40:06.458310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.141 [2024-11-21 03:40:06.458320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:19.141 [2024-11-21 03:40:06.458330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:31:19.141 [2024-11-21 03:40:06.458341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.141 [2024-11-21 03:40:06.470715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.141 [2024-11-21 03:40:06.470761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:19.141 [2024-11-21 03:40:06.470774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.349 ms 00:31:19.141 [2024-11-21 03:40:06.470782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.141 [2024-11-21 03:40:06.470947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.141 [2024-11-21 03:40:06.470960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:19.141 [2024-11-21 03:40:06.470970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:31:19.141 [2024-11-21 03:40:06.470983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.141 [2024-11-21 03:40:06.471036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.141 [2024-11-21 03:40:06.471052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:19.141 [2024-11-21 03:40:06.471060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:19.141 [2024-11-21 03:40:06.471069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.141 [2024-11-21 03:40:06.471400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.141 [2024-11-21 03:40:06.471423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:19.141 [2024-11-21 03:40:06.471432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:31:19.141 [2024-11-21 03:40:06.471440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.141 [2024-11-21 03:40:06.471457] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:19.141 [2024-11-21 03:40:06.471467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.141 [2024-11-21 03:40:06.471476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:19.141 [2024-11-21 03:40:06.471490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:19.141 [2024-11-21 03:40:06.471498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.141 [2024-11-21 03:40:06.482533] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:19.141 [2024-11-21 03:40:06.482731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.141 [2024-11-21 03:40:06.482745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:19.141 [2024-11-21 03:40:06.482759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.214 ms 00:31:19.141 [2024-11-21 03:40:06.482768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.141 [2024-11-21 03:40:06.485430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.141 [2024-11-21 03:40:06.485471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:19.141 [2024-11-21 03:40:06.485486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.630 ms 00:31:19.142 [2024-11-21 03:40:06.485495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.142 [2024-11-21 03:40:06.485603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.142 [2024-11-21 03:40:06.485617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:19.142 [2024-11-21 03:40:06.485627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:31:19.142 [2024-11-21 03:40:06.485636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.142 [2024-11-21 03:40:06.485667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.142 [2024-11-21 03:40:06.485680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:19.142 [2024-11-21 03:40:06.485694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:19.142 [2024-11-21 03:40:06.485703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.142 [2024-11-21 03:40:06.485745] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:19.142 [2024-11-21 03:40:06.485756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.142 [2024-11-21 03:40:06.485764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:19.142 [2024-11-21 03:40:06.485773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:19.142 [2024-11-21 03:40:06.485785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.142 [2024-11-21 03:40:06.493539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.142 [2024-11-21 03:40:06.493594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:19.142 [2024-11-21 03:40:06.493606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.733 ms 00:31:19.142 [2024-11-21 03:40:06.493615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.142 [2024-11-21 03:40:06.493715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.142 [2024-11-21 03:40:06.493730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:19.142 [2024-11-21 03:40:06.493741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:31:19.142 [2024-11-21 03:40:06.493749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.142 [2024-11-21 03:40:06.495425] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 94.474 ms, result 0 
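Each management step above is traced with a name, a per-step "duration: X ms" notice, and a status, and finish_msg then reports the whole 'FTL startup' process at 94.474 ms. A minimal sketch for tallying the two against each other, assuming the log text is available as a plain string; the step lines use "duration: X ms" while the summary uses "duration = X ms", so the two patterns stay disjoint:

    import re

    # Sum per-step durations from the trace_step notices and compare with the
    # total reported by finish_msg for the management process.
    step_ms = re.compile(r"duration: ([0-9.]+) ms")
    total_ms = re.compile(r"duration = ([0-9.]+) ms")

    def check_trace(log_text: str) -> None:
        steps = sum(float(x) for x in step_ms.findall(log_text))
        total = float(total_ms.search(log_text).group(1))
        # Any remainder is time between steps that the trace does not attribute.
        print(f"traced steps: {steps:.3f} ms of reported total: {total:.3f} ms")

Fed the 'FTL startup' block above, the traced steps sum to roughly 90 ms of the reported 94.474 ms; the gap is untraced time between actions.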
00:31:20.527  [2024-11-21T03:40:09.037Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-21T03:40:09.981Z] Copying: 29/1024 [MB] (17 MBps) [2024-11-21T03:40:10.924Z] Copying: 43/1024 [MB] (13 MBps) [2024-11-21T03:40:11.866Z] Copying: 56/1024 [MB] (13 MBps) [2024-11-21T03:40:12.810Z] Copying: 69/1024 [MB] (12 MBps) [2024-11-21T03:40:13.755Z] Copying: 84/1024 [MB] (14 MBps) [2024-11-21T03:40:14.699Z] Copying: 97/1024 [MB] (13 MBps) [2024-11-21T03:40:16.085Z] Copying: 118/1024 [MB] (20 MBps) [2024-11-21T03:40:17.030Z] Copying: 141/1024 [MB] (23 MBps) [2024-11-21T03:40:17.973Z] Copying: 158/1024 [MB] (16 MBps) [2024-11-21T03:40:18.917Z] Copying: 176/1024 [MB] (18 MBps) [2024-11-21T03:40:19.861Z] Copying: 196/1024 [MB] (20 MBps) [2024-11-21T03:40:20.805Z] Copying: 214/1024 [MB] (17 MBps) [2024-11-21T03:40:21.748Z] Copying: 237/1024 [MB] (23 MBps) [2024-11-21T03:40:22.691Z] Copying: 265/1024 [MB] (28 MBps) [2024-11-21T03:40:24.078Z] Copying: 290/1024 [MB] (24 MBps) [2024-11-21T03:40:25.023Z] Copying: 311/1024 [MB] (20 MBps) [2024-11-21T03:40:25.970Z] Copying: 333/1024 [MB] (22 MBps) [2024-11-21T03:40:26.915Z] Copying: 359/1024 [MB] (25 MBps) [2024-11-21T03:40:27.856Z] Copying: 373/1024 [MB] (13 MBps) [2024-11-21T03:40:28.798Z] Copying: 389/1024 [MB] (16 MBps) [2024-11-21T03:40:29.741Z] Copying: 407/1024 [MB] (18 MBps) [2024-11-21T03:40:31.130Z] Copying: 429/1024 [MB] (22 MBps) [2024-11-21T03:40:31.703Z] Copying: 457/1024 [MB] (27 MBps) [2024-11-21T03:40:32.702Z] Copying: 475/1024 [MB] (18 MBps) [2024-11-21T03:40:34.093Z] Copying: 490/1024 [MB] (14 MBps) [2024-11-21T03:40:35.036Z] Copying: 501/1024 [MB] (10 MBps) [2024-11-21T03:40:35.978Z] Copying: 517/1024 [MB] (16 MBps) [2024-11-21T03:40:36.920Z] Copying: 531/1024 [MB] (13 MBps) [2024-11-21T03:40:37.861Z] Copying: 554/1024 [MB] (22 MBps) [2024-11-21T03:40:38.805Z] Copying: 573/1024 [MB] (19 MBps) [2024-11-21T03:40:39.769Z] Copying: 595/1024 [MB] (21 MBps) [2024-11-21T03:40:40.715Z] Copying: 614/1024 [MB] (18 MBps) [2024-11-21T03:40:42.106Z] Copying: 637/1024 [MB] (23 MBps) [2024-11-21T03:40:43.052Z] Copying: 660/1024 [MB] (23 MBps) [2024-11-21T03:40:43.998Z] Copying: 677/1024 [MB] (16 MBps) [2024-11-21T03:40:44.942Z] Copying: 691/1024 [MB] (13 MBps) [2024-11-21T03:40:45.886Z] Copying: 708/1024 [MB] (16 MBps) [2024-11-21T03:40:46.829Z] Copying: 724/1024 [MB] (15 MBps) [2024-11-21T03:40:47.771Z] Copying: 734/1024 [MB] (10 MBps) [2024-11-21T03:40:48.714Z] Copying: 745/1024 [MB] (10 MBps) [2024-11-21T03:40:50.115Z] Copying: 759/1024 [MB] (14 MBps) [2024-11-21T03:40:51.058Z] Copying: 770/1024 [MB] (11 MBps) [2024-11-21T03:40:52.001Z] Copying: 782/1024 [MB] (12 MBps) [2024-11-21T03:40:52.946Z] Copying: 793/1024 [MB] (10 MBps) [2024-11-21T03:40:53.891Z] Copying: 809/1024 [MB] (16 MBps) [2024-11-21T03:40:54.834Z] Copying: 825/1024 [MB] (15 MBps) [2024-11-21T03:40:55.778Z] Copying: 836/1024 [MB] (11 MBps) [2024-11-21T03:40:56.722Z] Copying: 854/1024 [MB] (17 MBps) [2024-11-21T03:40:58.110Z] Copying: 865/1024 [MB] (11 MBps) [2024-11-21T03:40:59.054Z] Copying: 877/1024 [MB] (12 MBps) [2024-11-21T03:40:59.999Z] Copying: 889/1024 [MB] (11 MBps) [2024-11-21T03:41:00.941Z] Copying: 904/1024 [MB] (14 MBps) [2024-11-21T03:41:01.883Z] Copying: 914/1024 [MB] (10 MBps) [2024-11-21T03:41:02.827Z] Copying: 931/1024 [MB] (16 MBps) [2024-11-21T03:41:03.771Z] Copying: 942/1024 [MB] (11 MBps) [2024-11-21T03:41:04.797Z] Copying: 954/1024 [MB] (11 MBps) [2024-11-21T03:41:05.739Z] Copying: 965/1024 [MB] (11 MBps) [2024-11-21T03:41:07.124Z] Copying: 976/1024 [MB] (10 MBps) 
[2024-11-21T03:41:07.696Z] Copying: 991/1024 [MB] (15 MBps) [2024-11-21T03:41:08.267Z] Copying: 1019/1024 [MB] (28 MBps) [2024-11-21T03:41:08.530Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-21 03:41:08.355892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.965 [2024-11-21 03:41:08.356284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:20.965 [2024-11-21 03:41:08.356385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:20.965 [2024-11-21 03:41:08.356419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.965 [2024-11-21 03:41:08.356478] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:20.965 [2024-11-21 03:41:08.357302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.965 [2024-11-21 03:41:08.357458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:20.965 [2024-11-21 03:41:08.357532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:32:20.965 [2024-11-21 03:41:08.357564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.965 [2024-11-21 03:41:08.357891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.965 [2024-11-21 03:41:08.357950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:20.965 [2024-11-21 03:41:08.357979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:32:20.965 [2024-11-21 03:41:08.358004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.965 [2024-11-21 03:41:08.358063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.965 [2024-11-21 03:41:08.358091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:20.965 [2024-11-21 03:41:08.358117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:20.965 [2024-11-21 03:41:08.358275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.965 [2024-11-21 03:41:08.358372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.965 [2024-11-21 03:41:08.358387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:20.965 [2024-11-21 03:41:08.358399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:32:20.965 [2024-11-21 03:41:08.358409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.965 [2024-11-21 03:41:08.358428] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:20.965 [2024-11-21 03:41:08.358443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 
state: free 00:32:20.965 [2024-11-21 03:41:08.358508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 
261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.358868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.359132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.359173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.359210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.359247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.359344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.359384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.359420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.359457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.359550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:20.965 [2024-11-21 03:41:08.359587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359882] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.359996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.360004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.360012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.360020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.360028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.360036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.360044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.360052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:20.966 [2024-11-21 03:41:08.360069] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:20.966 [2024-11-21 03:41:08.360090] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9bb65078-7682-42ba-84f4-487ad2767c85 00:32:20.966 [2024-11-21 03:41:08.360102] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:20.966 [2024-11-21 03:41:08.360110] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:20.966 [2024-11-21 03:41:08.360119] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:20.966 [2024-11-21 03:41:08.360128] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:20.966 [2024-11-21 
03:41:08.360139] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:20.966 [2024-11-21 03:41:08.360147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:20.966 [2024-11-21 03:41:08.360155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:20.966 [2024-11-21 03:41:08.360162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:20.966 [2024-11-21 03:41:08.360169] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:20.966 [2024-11-21 03:41:08.360177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.966 [2024-11-21 03:41:08.360186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:20.966 [2024-11-21 03:41:08.360195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.751 ms 00:32:20.966 [2024-11-21 03:41:08.360202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.966 [2024-11-21 03:41:08.362862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.966 [2024-11-21 03:41:08.362939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:20.966 [2024-11-21 03:41:08.362964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms 00:32:20.966 [2024-11-21 03:41:08.362973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.966 [2024-11-21 03:41:08.363099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.966 [2024-11-21 03:41:08.363108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:20.966 [2024-11-21 03:41:08.363121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:32:20.966 [2024-11-21 03:41:08.363129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.966 [2024-11-21 03:41:08.371723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.966 [2024-11-21 03:41:08.371777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:20.966 [2024-11-21 03:41:08.371788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.966 [2024-11-21 03:41:08.371796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.966 [2024-11-21 03:41:08.371854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.966 [2024-11-21 03:41:08.371862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:20.966 [2024-11-21 03:41:08.371875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.966 [2024-11-21 03:41:08.371884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.966 [2024-11-21 03:41:08.371967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.966 [2024-11-21 03:41:08.371979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:20.966 [2024-11-21 03:41:08.371988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.966 [2024-11-21 03:41:08.371995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.966 [2024-11-21 03:41:08.372013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.966 [2024-11-21 03:41:08.372023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:20.966 [2024-11-21 03:41:08.372030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:32:20.966 [2024-11-21 03:41:08.372041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.966 [2024-11-21 03:41:08.387699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.966 [2024-11-21 03:41:08.387754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:20.966 [2024-11-21 03:41:08.387766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.966 [2024-11-21 03:41:08.387775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.966 [2024-11-21 03:41:08.399005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.966 [2024-11-21 03:41:08.399056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:20.966 [2024-11-21 03:41:08.399070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.966 [2024-11-21 03:41:08.399088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.966 [2024-11-21 03:41:08.399140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.966 [2024-11-21 03:41:08.399150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:20.966 [2024-11-21 03:41:08.399159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.966 [2024-11-21 03:41:08.399167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.966 [2024-11-21 03:41:08.399203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.966 [2024-11-21 03:41:08.399212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:20.966 [2024-11-21 03:41:08.399220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.967 [2024-11-21 03:41:08.399228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.967 [2024-11-21 03:41:08.399289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.967 [2024-11-21 03:41:08.399299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:20.967 [2024-11-21 03:41:08.399307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.967 [2024-11-21 03:41:08.399316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.967 [2024-11-21 03:41:08.399347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.967 [2024-11-21 03:41:08.399356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:20.967 [2024-11-21 03:41:08.399364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.967 [2024-11-21 03:41:08.399372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.967 [2024-11-21 03:41:08.399414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.967 [2024-11-21 03:41:08.399424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:20.967 [2024-11-21 03:41:08.399432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.967 [2024-11-21 03:41:08.399440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.967 [2024-11-21 03:41:08.399483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.967 [2024-11-21 03:41:08.399501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:20.967 
[2024-11-21 03:41:08.399510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.967 [2024-11-21 03:41:08.399518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.967 [2024-11-21 03:41:08.399655] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 43.734 ms, result 0 00:32:21.228 00:32:21.228 00:32:21.228 03:41:08 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:23.776 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:23.776 03:41:10 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:32:23.776 [2024-11-21 03:41:10.927859] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:32:23.776 [2024-11-21 03:41:10.928024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97528 ] 00:32:23.776 [2024-11-21 03:41:11.063380] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:23.776 [2024-11-21 03:41:11.094075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.776 [2024-11-21 03:41:11.122685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.776 [2024-11-21 03:41:11.237341] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:23.776 [2024-11-21 03:41:11.237425] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:24.040 [2024-11-21 03:41:11.398413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.040 [2024-11-21 03:41:11.398483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:24.040 [2024-11-21 03:41:11.398500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:24.040 [2024-11-21 03:41:11.398508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.040 [2024-11-21 03:41:11.398566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.040 [2024-11-21 03:41:11.398581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:24.040 [2024-11-21 03:41:11.398591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:24.040 [2024-11-21 03:41:11.398603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.040 [2024-11-21 03:41:11.398627] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:24.040 [2024-11-21 03:41:11.398965] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:24.040 [2024-11-21 03:41:11.398989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.040 [2024-11-21 03:41:11.398998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:24.040 [2024-11-21 03:41:11.399011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:32:24.040 [2024-11-21 03:41:11.399020] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.040 [2024-11-21 03:41:11.399308] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:24.040 [2024-11-21 03:41:11.399348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.040 [2024-11-21 03:41:11.399360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:24.040 [2024-11-21 03:41:11.399374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:32:24.040 [2024-11-21 03:41:11.399388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.040 [2024-11-21 03:41:11.399517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.040 [2024-11-21 03:41:11.399530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:24.040 [2024-11-21 03:41:11.399542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:32:24.040 [2024-11-21 03:41:11.399551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.040 [2024-11-21 03:41:11.399814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.040 [2024-11-21 03:41:11.399826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:24.040 [2024-11-21 03:41:11.399836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:32:24.040 [2024-11-21 03:41:11.399843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.040 [2024-11-21 03:41:11.399953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.040 [2024-11-21 03:41:11.399986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:24.040 [2024-11-21 03:41:11.399995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:32:24.040 [2024-11-21 03:41:11.400003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.040 [2024-11-21 03:41:11.400030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.040 [2024-11-21 03:41:11.400044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:24.040 [2024-11-21 03:41:11.400052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:24.040 [2024-11-21 03:41:11.400060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.040 [2024-11-21 03:41:11.400082] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:24.040 [2024-11-21 03:41:11.402238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.040 [2024-11-21 03:41:11.402273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:24.040 [2024-11-21 03:41:11.402284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.161 ms 00:32:24.040 [2024-11-21 03:41:11.402298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.040 [2024-11-21 03:41:11.402332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.040 [2024-11-21 03:41:11.402341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:24.040 [2024-11-21 03:41:11.402350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:24.040 [2024-11-21 03:41:11.402357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.040 [2024-11-21 03:41:11.402409] ftl_layout.c: 
613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:24.040 [2024-11-21 03:41:11.402435] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:24.040 [2024-11-21 03:41:11.402471] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:24.040 [2024-11-21 03:41:11.402491] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:24.040 [2024-11-21 03:41:11.402602] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:24.040 [2024-11-21 03:41:11.402613] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:24.040 [2024-11-21 03:41:11.402624] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:24.040 [2024-11-21 03:41:11.402637] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:24.040 [2024-11-21 03:41:11.402649] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:24.040 [2024-11-21 03:41:11.402665] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:24.040 [2024-11-21 03:41:11.402677] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:24.040 [2024-11-21 03:41:11.402685] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:24.040 [2024-11-21 03:41:11.402692] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:24.040 [2024-11-21 03:41:11.402700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.040 [2024-11-21 03:41:11.402707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:24.040 [2024-11-21 03:41:11.402715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:32:24.040 [2024-11-21 03:41:11.402723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.040 [2024-11-21 03:41:11.402805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.040 [2024-11-21 03:41:11.402813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:24.040 [2024-11-21 03:41:11.402824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:24.040 [2024-11-21 03:41:11.402831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.040 [2024-11-21 03:41:11.402973] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:24.040 [2024-11-21 03:41:11.402989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:24.040 [2024-11-21 03:41:11.403000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:24.040 [2024-11-21 03:41:11.403014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.040 [2024-11-21 03:41:11.403023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:24.040 [2024-11-21 03:41:11.403031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:24.040 [2024-11-21 03:41:11.403039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:24.040 [2024-11-21 03:41:11.403047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 
00:32:24.041 [2024-11-21 03:41:11.403055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:24.041 [2024-11-21 03:41:11.403063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:24.041 [2024-11-21 03:41:11.403077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:24.041 [2024-11-21 03:41:11.403084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:24.041 [2024-11-21 03:41:11.403092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:24.041 [2024-11-21 03:41:11.403100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:24.041 [2024-11-21 03:41:11.403107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:24.041 [2024-11-21 03:41:11.403115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.041 [2024-11-21 03:41:11.403123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:24.041 [2024-11-21 03:41:11.403134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:24.041 [2024-11-21 03:41:11.403142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.041 [2024-11-21 03:41:11.403150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:24.041 [2024-11-21 03:41:11.403157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:24.041 [2024-11-21 03:41:11.403164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:24.041 [2024-11-21 03:41:11.403172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:24.041 [2024-11-21 03:41:11.403179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:24.041 [2024-11-21 03:41:11.403186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:24.041 [2024-11-21 03:41:11.403193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:24.041 [2024-11-21 03:41:11.403201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:24.041 [2024-11-21 03:41:11.403208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:24.041 [2024-11-21 03:41:11.403216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:24.041 [2024-11-21 03:41:11.403223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:24.041 [2024-11-21 03:41:11.403231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:24.041 [2024-11-21 03:41:11.403239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:24.041 [2024-11-21 03:41:11.403247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:24.041 [2024-11-21 03:41:11.403261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:24.041 [2024-11-21 03:41:11.403269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:24.041 [2024-11-21 03:41:11.403287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:24.041 [2024-11-21 03:41:11.403295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:24.041 [2024-11-21 03:41:11.403303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:24.041 [2024-11-21 03:41:11.403310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:24.041 [2024-11-21 03:41:11.403318] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.041 [2024-11-21 03:41:11.403326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:24.041 [2024-11-21 03:41:11.403334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:24.041 [2024-11-21 03:41:11.403341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.041 [2024-11-21 03:41:11.403349] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:24.041 [2024-11-21 03:41:11.403358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:24.041 [2024-11-21 03:41:11.403367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:24.041 [2024-11-21 03:41:11.403378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.041 [2024-11-21 03:41:11.403386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:24.041 [2024-11-21 03:41:11.403393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:24.041 [2024-11-21 03:41:11.403402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:24.041 [2024-11-21 03:41:11.403409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:24.041 [2024-11-21 03:41:11.403416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:24.041 [2024-11-21 03:41:11.403422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:24.041 [2024-11-21 03:41:11.403430] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:24.041 [2024-11-21 03:41:11.403440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:24.041 [2024-11-21 03:41:11.403448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:24.041 [2024-11-21 03:41:11.403456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:24.041 [2024-11-21 03:41:11.403463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:24.041 [2024-11-21 03:41:11.403470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:24.041 [2024-11-21 03:41:11.403477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:24.041 [2024-11-21 03:41:11.403484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:24.041 [2024-11-21 03:41:11.403491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:24.041 [2024-11-21 03:41:11.403498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:24.041 [2024-11-21 03:41:11.403505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:24.041 [2024-11-21 03:41:11.403512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 
blk_offs:0x71a0 blk_sz:0x20 00:32:24.041 [2024-11-21 03:41:11.403522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:24.041 [2024-11-21 03:41:11.403529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:24.041 [2024-11-21 03:41:11.403537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:24.041 [2024-11-21 03:41:11.403544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:24.041 [2024-11-21 03:41:11.403551] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:24.041 [2024-11-21 03:41:11.403560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:24.041 [2024-11-21 03:41:11.403569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:24.041 [2024-11-21 03:41:11.403576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:24.041 [2024-11-21 03:41:11.403584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:24.041 [2024-11-21 03:41:11.403591] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:24.041 [2024-11-21 03:41:11.403599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.041 [2024-11-21 03:41:11.403606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:24.041 [2024-11-21 03:41:11.403618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:32:24.041 [2024-11-21 03:41:11.403624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.041 [2024-11-21 03:41:11.413438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.041 [2024-11-21 03:41:11.413492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:24.041 [2024-11-21 03:41:11.413504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.772 ms 00:32:24.041 [2024-11-21 03:41:11.413514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.041 [2024-11-21 03:41:11.413596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.041 [2024-11-21 03:41:11.413608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:24.041 [2024-11-21 03:41:11.413617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:32:24.041 [2024-11-21 03:41:11.413634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.041 [2024-11-21 03:41:11.444096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.041 [2024-11-21 03:41:11.444197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:24.041 [2024-11-21 03:41:11.444232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.400 ms 00:32:24.041 [2024-11-21 03:41:11.444256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:24.041 [2024-11-21 03:41:11.444354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.041 [2024-11-21 03:41:11.444383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:24.041 [2024-11-21 03:41:11.444409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:24.041 [2024-11-21 03:41:11.444431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.041 [2024-11-21 03:41:11.444671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.041 [2024-11-21 03:41:11.444738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:24.041 [2024-11-21 03:41:11.444762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:32:24.041 [2024-11-21 03:41:11.444781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.041 [2024-11-21 03:41:11.445129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.041 [2024-11-21 03:41:11.445167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:24.041 [2024-11-21 03:41:11.445189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:32:24.041 [2024-11-21 03:41:11.445209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.041 [2024-11-21 03:41:11.454114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.041 [2024-11-21 03:41:11.454163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:24.041 [2024-11-21 03:41:11.454181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.858 ms 00:32:24.041 [2024-11-21 03:41:11.454189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.041 [2024-11-21 03:41:11.454297] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:24.041 [2024-11-21 03:41:11.454311] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:24.042 [2024-11-21 03:41:11.454329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.042 [2024-11-21 03:41:11.454338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:24.042 [2024-11-21 03:41:11.454347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:32:24.042 [2024-11-21 03:41:11.454360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.042 [2024-11-21 03:41:11.466836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.042 [2024-11-21 03:41:11.466911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:24.042 [2024-11-21 03:41:11.466928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.459 ms 00:32:24.042 [2024-11-21 03:41:11.466940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.042 [2024-11-21 03:41:11.467093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.042 [2024-11-21 03:41:11.467104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:24.042 [2024-11-21 03:41:11.467119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:32:24.042 [2024-11-21 03:41:11.467133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.042 [2024-11-21 03:41:11.467179] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.042 [2024-11-21 03:41:11.467192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:24.042 [2024-11-21 03:41:11.467201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:24.042 [2024-11-21 03:41:11.467208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.042 [2024-11-21 03:41:11.467528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.042 [2024-11-21 03:41:11.467551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:24.042 [2024-11-21 03:41:11.467559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:32:24.042 [2024-11-21 03:41:11.467567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.042 [2024-11-21 03:41:11.467583] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:24.042 [2024-11-21 03:41:11.467593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.042 [2024-11-21 03:41:11.467601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:24.042 [2024-11-21 03:41:11.467612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:24.042 [2024-11-21 03:41:11.467619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.042 [2024-11-21 03:41:11.477272] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:24.042 [2024-11-21 03:41:11.477447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.042 [2024-11-21 03:41:11.477464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:24.042 [2024-11-21 03:41:11.477475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.809 ms 00:32:24.042 [2024-11-21 03:41:11.477482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.042 [2024-11-21 03:41:11.479991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.042 [2024-11-21 03:41:11.480036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:24.042 [2024-11-21 03:41:11.480046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.482 ms 00:32:24.042 [2024-11-21 03:41:11.480053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.042 [2024-11-21 03:41:11.480148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.042 [2024-11-21 03:41:11.480159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:24.042 [2024-11-21 03:41:11.480168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:24.042 [2024-11-21 03:41:11.480182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.042 [2024-11-21 03:41:11.480216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.042 [2024-11-21 03:41:11.480225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:24.042 [2024-11-21 03:41:11.480236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:24.042 [2024-11-21 03:41:11.480244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.042 [2024-11-21 03:41:11.480278] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:24.042 
[2024-11-21 03:41:11.480288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.042 [2024-11-21 03:41:11.480296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:24.042 [2024-11-21 03:41:11.480304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:24.042 [2024-11-21 03:41:11.480311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.042 [2024-11-21 03:41:11.486582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.042 [2024-11-21 03:41:11.486642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:24.042 [2024-11-21 03:41:11.486653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.253 ms 00:32:24.042 [2024-11-21 03:41:11.486661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.042 [2024-11-21 03:41:11.486752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.042 [2024-11-21 03:41:11.486767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:24.042 [2024-11-21 03:41:11.486775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:32:24.042 [2024-11-21 03:41:11.486788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.042 [2024-11-21 03:41:11.488597] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 89.731 ms, result 0 00:32:24.986  [2024-11-21T03:41:13.940Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-21T03:41:14.513Z] Copying: 26/1024 [MB] (13 MBps) [2024-11-21T03:41:15.900Z] Copying: 45/1024 [MB] (19 MBps) [2024-11-21T03:41:16.844Z] Copying: 71/1024 [MB] (26 MBps) [2024-11-21T03:41:17.788Z] Copying: 94/1024 [MB] (23 MBps) [2024-11-21T03:41:18.732Z] Copying: 133/1024 [MB] (38 MBps) [2024-11-21T03:41:19.689Z] Copying: 156/1024 [MB] (22 MBps) [2024-11-21T03:41:20.635Z] Copying: 174/1024 [MB] (18 MBps) [2024-11-21T03:41:21.578Z] Copying: 189/1024 [MB] (15 MBps) [2024-11-21T03:41:22.522Z] Copying: 200/1024 [MB] (11 MBps) [2024-11-21T03:41:23.911Z] Copying: 213/1024 [MB] (12 MBps) [2024-11-21T03:41:24.855Z] Copying: 230/1024 [MB] (16 MBps) [2024-11-21T03:41:25.799Z] Copying: 251/1024 [MB] (21 MBps) [2024-11-21T03:41:26.741Z] Copying: 272/1024 [MB] (21 MBps) [2024-11-21T03:41:27.685Z] Copying: 295/1024 [MB] (22 MBps) [2024-11-21T03:41:28.628Z] Copying: 305/1024 [MB] (10 MBps) [2024-11-21T03:41:29.571Z] Copying: 330/1024 [MB] (24 MBps) [2024-11-21T03:41:30.514Z] Copying: 345/1024 [MB] (14 MBps) [2024-11-21T03:41:31.903Z] Copying: 361/1024 [MB] (16 MBps) [2024-11-21T03:41:32.848Z] Copying: 378/1024 [MB] (16 MBps) [2024-11-21T03:41:33.793Z] Copying: 393/1024 [MB] (15 MBps) [2024-11-21T03:41:34.737Z] Copying: 425/1024 [MB] (31 MBps) [2024-11-21T03:41:35.681Z] Copying: 444/1024 [MB] (19 MBps) [2024-11-21T03:41:36.680Z] Copying: 472/1024 [MB] (27 MBps) [2024-11-21T03:41:37.631Z] Copying: 488/1024 [MB] (16 MBps) [2024-11-21T03:41:38.575Z] Copying: 513/1024 [MB] (24 MBps) [2024-11-21T03:41:39.518Z] Copying: 526/1024 [MB] (13 MBps) [2024-11-21T03:41:40.907Z] Copying: 538/1024 [MB] (11 MBps) [2024-11-21T03:41:41.850Z] Copying: 552/1024 [MB] (14 MBps) [2024-11-21T03:41:42.795Z] Copying: 568/1024 [MB] (15 MBps) [2024-11-21T03:41:43.739Z] Copying: 581/1024 [MB] (12 MBps) [2024-11-21T03:41:44.683Z] Copying: 593/1024 [MB] (11 MBps) [2024-11-21T03:41:45.628Z] Copying: 603/1024 [MB] (10 MBps) [2024-11-21T03:41:46.575Z] Copying: 614/1024 
[MB] (10 MBps) [2024-11-21T03:41:47.517Z] Copying: 625/1024 [MB] (11 MBps) [2024-11-21T03:41:48.902Z] Copying: 643/1024 [MB] (17 MBps) [2024-11-21T03:41:49.847Z] Copying: 658/1024 [MB] (15 MBps) [2024-11-21T03:41:50.788Z] Copying: 672/1024 [MB] (14 MBps) [2024-11-21T03:41:51.732Z] Copying: 712/1024 [MB] (39 MBps) [2024-11-21T03:41:52.676Z] Copying: 741/1024 [MB] (29 MBps) [2024-11-21T03:41:53.621Z] Copying: 755/1024 [MB] (14 MBps) [2024-11-21T03:41:54.565Z] Copying: 775/1024 [MB] (20 MBps) [2024-11-21T03:41:55.507Z] Copying: 791/1024 [MB] (15 MBps) [2024-11-21T03:41:56.895Z] Copying: 809/1024 [MB] (18 MBps) [2024-11-21T03:41:57.841Z] Copying: 823/1024 [MB] (13 MBps) [2024-11-21T03:41:58.785Z] Copying: 839/1024 [MB] (16 MBps) [2024-11-21T03:41:59.728Z] Copying: 857/1024 [MB] (18 MBps) [2024-11-21T03:42:00.672Z] Copying: 877/1024 [MB] (20 MBps) [2024-11-21T03:42:01.616Z] Copying: 895/1024 [MB] (17 MBps) [2024-11-21T03:42:02.560Z] Copying: 914/1024 [MB] (19 MBps) [2024-11-21T03:42:03.505Z] Copying: 924/1024 [MB] (10 MBps) [2024-11-21T03:42:04.894Z] Copying: 935/1024 [MB] (10 MBps) [2024-11-21T03:42:05.838Z] Copying: 945/1024 [MB] (10 MBps) [2024-11-21T03:42:06.779Z] Copying: 966/1024 [MB] (21 MBps) [2024-11-21T03:42:07.720Z] Copying: 1011/1024 [MB] (45 MBps) [2024-11-21T03:42:08.729Z] Copying: 1023/1024 [MB] (11 MBps) [2024-11-21T03:42:08.729Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-21 03:42:08.419294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.164 [2024-11-21 03:42:08.419371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:21.164 [2024-11-21 03:42:08.419389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:21.164 [2024-11-21 03:42:08.419399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.164 [2024-11-21 03:42:08.421669] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:21.164 [2024-11-21 03:42:08.424826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.164 [2024-11-21 03:42:08.424881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:21.164 [2024-11-21 03:42:08.424894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.102 ms 00:33:21.164 [2024-11-21 03:42:08.424917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.164 [2024-11-21 03:42:08.434125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.164 [2024-11-21 03:42:08.434176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:21.164 [2024-11-21 03:42:08.434188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.314 ms 00:33:21.164 [2024-11-21 03:42:08.434197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.164 [2024-11-21 03:42:08.434228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.164 [2024-11-21 03:42:08.434238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:21.164 [2024-11-21 03:42:08.434256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:21.164 [2024-11-21 03:42:08.434265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.164 [2024-11-21 03:42:08.434324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.164 [2024-11-21 03:42:08.434335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Set FTL SHM clean state 00:33:21.164 [2024-11-21 03:42:08.434347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:33:21.164 [2024-11-21 03:42:08.434356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.164 [2024-11-21 03:42:08.434370] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:21.164 [2024-11-21 03:42:08.434382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128768 / 261120 wr_cnt: 1 state: open 00:33:21.164 [2024-11-21 03:42:08.434392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434558] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:21.164 [2024-11-21 03:42:08.434795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 
[2024-11-21 03:42:08.434803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.434994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 
state: free 00:33:21.165 [2024-11-21 03:42:08.435042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 
0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:21.165 [2024-11-21 03:42:08.435268] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:21.165 [2024-11-21 03:42:08.435280] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9bb65078-7682-42ba-84f4-487ad2767c85 00:33:21.165 [2024-11-21 03:42:08.435289] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128768 00:33:21.165 [2024-11-21 03:42:08.435296] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128800 00:33:21.165 [2024-11-21 03:42:08.435304] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128768 00:33:21.165 [2024-11-21 03:42:08.435317] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:33:21.165 [2024-11-21 03:42:08.435328] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:21.165 [2024-11-21 03:42:08.435344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:21.165 [2024-11-21 03:42:08.435352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:21.165 [2024-11-21 03:42:08.435358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:21.165 [2024-11-21 03:42:08.435365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:21.165 [2024-11-21 03:42:08.435371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.165 [2024-11-21 03:42:08.435379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:21.165 [2024-11-21 03:42:08.435388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:33:21.165 [2024-11-21 03:42:08.435396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.165 [2024-11-21 03:42:08.437712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.165 [2024-11-21 03:42:08.437754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:21.165 [2024-11-21 03:42:08.437775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.300 ms 00:33:21.165 [2024-11-21 03:42:08.437787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.165 [2024-11-21 03:42:08.437928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.165 [2024-11-21 03:42:08.437939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:21.165 [2024-11-21 03:42:08.437949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:33:21.165 [2024-11-21 03:42:08.437958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.165 [2024-11-21 03:42:08.445350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.165 [2024-11-21 03:42:08.445396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:21.165 [2024-11-21 03:42:08.445408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.165 [2024-11-21 03:42:08.445416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:33:21.165 [2024-11-21 03:42:08.445478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.165 [2024-11-21 03:42:08.445493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:21.165 [2024-11-21 03:42:08.445502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.165 [2024-11-21 03:42:08.445511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.165 [2024-11-21 03:42:08.445563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.165 [2024-11-21 03:42:08.445573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:21.165 [2024-11-21 03:42:08.445584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.165 [2024-11-21 03:42:08.445592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.165 [2024-11-21 03:42:08.445609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.165 [2024-11-21 03:42:08.445617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:21.165 [2024-11-21 03:42:08.445626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.165 [2024-11-21 03:42:08.445633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.165 [2024-11-21 03:42:08.458914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.165 [2024-11-21 03:42:08.458972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:21.165 [2024-11-21 03:42:08.458988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.165 [2024-11-21 03:42:08.458996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.165 [2024-11-21 03:42:08.470092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.166 [2024-11-21 03:42:08.470155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:21.166 [2024-11-21 03:42:08.470167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.166 [2024-11-21 03:42:08.470176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.166 [2024-11-21 03:42:08.470224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.166 [2024-11-21 03:42:08.470234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:21.166 [2024-11-21 03:42:08.470243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.166 [2024-11-21 03:42:08.470256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.166 [2024-11-21 03:42:08.470292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.166 [2024-11-21 03:42:08.470302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:21.166 [2024-11-21 03:42:08.470311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.166 [2024-11-21 03:42:08.470318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.166 [2024-11-21 03:42:08.470377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.166 [2024-11-21 03:42:08.470387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:21.166 [2024-11-21 03:42:08.470396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.166 [2024-11-21 
03:42:08.470404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.166 [2024-11-21 03:42:08.470437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.166 [2024-11-21 03:42:08.470446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:21.166 [2024-11-21 03:42:08.470455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.166 [2024-11-21 03:42:08.470463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.166 [2024-11-21 03:42:08.470502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.166 [2024-11-21 03:42:08.470511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:21.166 [2024-11-21 03:42:08.470519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.166 [2024-11-21 03:42:08.470527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.166 [2024-11-21 03:42:08.470574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.166 [2024-11-21 03:42:08.470584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:21.166 [2024-11-21 03:42:08.470593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.166 [2024-11-21 03:42:08.470600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.166 [2024-11-21 03:42:08.470736] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 52.712 ms, result 0 00:33:22.109 00:33:22.109 00:33:22.109 03:42:09 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:33:22.109 [2024-11-21 03:42:09.502431] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:33:22.109 [2024-11-21 03:42:09.502578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98133 ] 00:33:22.109 [2024-11-21 03:42:09.639885] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:33:22.109 [2024-11-21 03:42:09.668956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.370 [2024-11-21 03:42:09.697924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.370 [2024-11-21 03:42:09.812813] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:22.370 [2024-11-21 03:42:09.812913] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:22.632 [2024-11-21 03:42:09.974268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.632 [2024-11-21 03:42:09.974325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:22.632 [2024-11-21 03:42:09.974340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:22.632 [2024-11-21 03:42:09.974348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.632 [2024-11-21 03:42:09.974406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.632 [2024-11-21 03:42:09.974421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:22.632 [2024-11-21 03:42:09.974430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:33:22.632 [2024-11-21 03:42:09.974438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.632 [2024-11-21 03:42:09.974461] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:22.632 [2024-11-21 03:42:09.974877] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:22.632 [2024-11-21 03:42:09.974935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.632 [2024-11-21 03:42:09.974944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:22.632 [2024-11-21 03:42:09.974961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:33:22.632 [2024-11-21 03:42:09.974969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.632 [2024-11-21 03:42:09.975296] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:22.632 [2024-11-21 03:42:09.975321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.632 [2024-11-21 03:42:09.975335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:22.632 [2024-11-21 03:42:09.975344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:33:22.633 [2024-11-21 03:42:09.975359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.633 [2024-11-21 03:42:09.975418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.633 [2024-11-21 03:42:09.975428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:22.633 [2024-11-21 03:42:09.975437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:33:22.633 [2024-11-21 03:42:09.975445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.633 [2024-11-21 03:42:09.975698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.633 [2024-11-21 03:42:09.975710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:22.633 [2024-11-21 03:42:09.975721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:33:22.633 [2024-11-21 03:42:09.975732] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.633 [2024-11-21 03:42:09.975819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.633 [2024-11-21 03:42:09.975829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:22.633 [2024-11-21 03:42:09.975838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:33:22.633 [2024-11-21 03:42:09.975845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.633 [2024-11-21 03:42:09.975868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.633 [2024-11-21 03:42:09.975876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:22.633 [2024-11-21 03:42:09.975888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:22.633 [2024-11-21 03:42:09.975915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.633 [2024-11-21 03:42:09.975942] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:22.633 [2024-11-21 03:42:09.978009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.633 [2024-11-21 03:42:09.978047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:22.633 [2024-11-21 03:42:09.978062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.072 ms 00:33:22.633 [2024-11-21 03:42:09.978070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.633 [2024-11-21 03:42:09.978109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.633 [2024-11-21 03:42:09.978118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:22.633 [2024-11-21 03:42:09.978126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:22.633 [2024-11-21 03:42:09.978134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.633 [2024-11-21 03:42:09.978190] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:22.633 [2024-11-21 03:42:09.978215] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:22.633 [2024-11-21 03:42:09.978255] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:22.633 [2024-11-21 03:42:09.978271] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:22.633 [2024-11-21 03:42:09.978375] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:22.633 [2024-11-21 03:42:09.978385] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:22.633 [2024-11-21 03:42:09.978398] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:22.633 [2024-11-21 03:42:09.978408] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:22.633 [2024-11-21 03:42:09.978420] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:22.633 [2024-11-21 03:42:09.978429] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:22.633 [2024-11-21 03:42:09.978437] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:33:22.633 [2024-11-21 03:42:09.978450] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:22.633 [2024-11-21 03:42:09.978458] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:22.633 [2024-11-21 03:42:09.978466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.633 [2024-11-21 03:42:09.978477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:22.633 [2024-11-21 03:42:09.978484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:33:22.633 [2024-11-21 03:42:09.978492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.633 [2024-11-21 03:42:09.978574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.633 [2024-11-21 03:42:09.978591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:22.633 [2024-11-21 03:42:09.978602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:22.633 [2024-11-21 03:42:09.978610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.633 [2024-11-21 03:42:09.978714] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:22.633 [2024-11-21 03:42:09.978730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:22.633 [2024-11-21 03:42:09.978740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:22.633 [2024-11-21 03:42:09.978749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:22.633 [2024-11-21 03:42:09.978758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:22.633 [2024-11-21 03:42:09.978771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:22.633 [2024-11-21 03:42:09.978780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:22.633 [2024-11-21 03:42:09.978788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:22.633 [2024-11-21 03:42:09.978797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:22.633 [2024-11-21 03:42:09.978805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:22.633 [2024-11-21 03:42:09.978822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:22.633 [2024-11-21 03:42:09.978831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:22.633 [2024-11-21 03:42:09.978839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:22.633 [2024-11-21 03:42:09.978847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:22.633 [2024-11-21 03:42:09.978855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:22.633 [2024-11-21 03:42:09.978879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:22.633 [2024-11-21 03:42:09.978887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:22.633 [2024-11-21 03:42:09.978920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:22.633 [2024-11-21 03:42:09.978929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:22.633 [2024-11-21 03:42:09.978938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:22.633 [2024-11-21 03:42:09.978946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:22.633 [2024-11-21 03:42:09.978956] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:22.633 [2024-11-21 03:42:09.978965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:22.633 [2024-11-21 03:42:09.978973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:22.633 [2024-11-21 03:42:09.978980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:22.633 [2024-11-21 03:42:09.978988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:22.633 [2024-11-21 03:42:09.978996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:22.633 [2024-11-21 03:42:09.979003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:22.633 [2024-11-21 03:42:09.979011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:22.633 [2024-11-21 03:42:09.979019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:22.633 [2024-11-21 03:42:09.979026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:22.633 [2024-11-21 03:42:09.979035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:22.633 [2024-11-21 03:42:09.979042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:22.633 [2024-11-21 03:42:09.979050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:22.633 [2024-11-21 03:42:09.979057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:22.633 [2024-11-21 03:42:09.979064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:22.633 [2024-11-21 03:42:09.979072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:22.633 [2024-11-21 03:42:09.979085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:22.633 [2024-11-21 03:42:09.979094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:22.633 [2024-11-21 03:42:09.979102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:22.633 [2024-11-21 03:42:09.979111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:22.633 [2024-11-21 03:42:09.979119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:22.633 [2024-11-21 03:42:09.979127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:22.633 [2024-11-21 03:42:09.979135] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:22.633 [2024-11-21 03:42:09.979144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:22.633 [2024-11-21 03:42:09.979153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:22.633 [2024-11-21 03:42:09.979166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:22.633 [2024-11-21 03:42:09.979174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:22.633 [2024-11-21 03:42:09.979180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:22.633 [2024-11-21 03:42:09.979187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:22.633 [2024-11-21 03:42:09.979195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:22.633 [2024-11-21 03:42:09.979201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:22.633 [2024-11-21 03:42:09.979207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:33:22.633 [2024-11-21 03:42:09.979218] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:22.634 [2024-11-21 03:42:09.979229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:22.634 [2024-11-21 03:42:09.979237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:22.634 [2024-11-21 03:42:09.979244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:22.634 [2024-11-21 03:42:09.979252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:22.634 [2024-11-21 03:42:09.979259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:22.634 [2024-11-21 03:42:09.979266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:22.634 [2024-11-21 03:42:09.979273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:22.634 [2024-11-21 03:42:09.979279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:22.634 [2024-11-21 03:42:09.979286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:22.634 [2024-11-21 03:42:09.979293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:22.634 [2024-11-21 03:42:09.979300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:22.634 [2024-11-21 03:42:09.979307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:22.634 [2024-11-21 03:42:09.979314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:22.634 [2024-11-21 03:42:09.979322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:22.634 [2024-11-21 03:42:09.979329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:22.634 [2024-11-21 03:42:09.979340] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:22.634 [2024-11-21 03:42:09.979350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:22.634 [2024-11-21 03:42:09.979358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:22.634 [2024-11-21 03:42:09.979366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:22.634 [2024-11-21 03:42:09.979373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:22.634 [2024-11-21 03:42:09.979381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:22.634 [2024-11-21 03:42:09.979388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 03:42:09.979396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:22.634 [2024-11-21 03:42:09.979404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:33:22.634 [2024-11-21 03:42:09.979411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:09.989400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 03:42:09.989442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:22.634 [2024-11-21 03:42:09.989454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.946 ms 00:33:22.634 [2024-11-21 03:42:09.989463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:09.989545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 03:42:09.989555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:22.634 [2024-11-21 03:42:09.989565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:33:22.634 [2024-11-21 03:42:09.989574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:10.009263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 03:42:10.009313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:22.634 [2024-11-21 03:42:10.009333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.634 ms 00:33:22.634 [2024-11-21 03:42:10.009343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:10.009391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 03:42:10.009402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:22.634 [2024-11-21 03:42:10.009412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:22.634 [2024-11-21 03:42:10.009421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:10.009530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 03:42:10.009547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:22.634 [2024-11-21 03:42:10.009557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:33:22.634 [2024-11-21 03:42:10.009566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:10.009704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 03:42:10.009725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:22.634 [2024-11-21 03:42:10.009734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:33:22.634 [2024-11-21 03:42:10.009744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:10.018214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 
03:42:10.018262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:22.634 [2024-11-21 03:42:10.018278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.447 ms 00:33:22.634 [2024-11-21 03:42:10.018286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:10.018412] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:33:22.634 [2024-11-21 03:42:10.018429] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:22.634 [2024-11-21 03:42:10.018441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 03:42:10.018458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:22.634 [2024-11-21 03:42:10.018469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:33:22.634 [2024-11-21 03:42:10.018482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:10.031667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 03:42:10.031714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:22.634 [2024-11-21 03:42:10.031726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.162 ms 00:33:22.634 [2024-11-21 03:42:10.031736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:10.031874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 03:42:10.031891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:22.634 [2024-11-21 03:42:10.031912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:33:22.634 [2024-11-21 03:42:10.031924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:10.031979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 03:42:10.031993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:22.634 [2024-11-21 03:42:10.032008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:22.634 [2024-11-21 03:42:10.032016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:10.032338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 03:42:10.032357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:22.634 [2024-11-21 03:42:10.032366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:33:22.634 [2024-11-21 03:42:10.032374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:10.032389] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:22.634 [2024-11-21 03:42:10.032404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 03:42:10.032418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:22.634 [2024-11-21 03:42:10.032429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:33:22.634 [2024-11-21 03:42:10.032438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:10.042297] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:22.634 [2024-11-21 03:42:10.042529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 03:42:10.042563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:22.634 [2024-11-21 03:42:10.042579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.073 ms 00:33:22.634 [2024-11-21 03:42:10.042592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:10.045289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 03:42:10.045323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:22.634 [2024-11-21 03:42:10.045336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.655 ms 00:33:22.634 [2024-11-21 03:42:10.045349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:10.045438] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:33:22.634 [2024-11-21 03:42:10.046139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 03:42:10.046160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:22.634 [2024-11-21 03:42:10.046171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:33:22.634 [2024-11-21 03:42:10.046185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:10.046214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 03:42:10.046224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:22.634 [2024-11-21 03:42:10.046238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:22.634 [2024-11-21 03:42:10.046247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:10.046287] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:22.634 [2024-11-21 03:42:10.046301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.634 [2024-11-21 03:42:10.046309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:22.634 [2024-11-21 03:42:10.046318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:33:22.634 [2024-11-21 03:42:10.046329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.634 [2024-11-21 03:42:10.052711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.635 [2024-11-21 03:42:10.052759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:22.635 [2024-11-21 03:42:10.052771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.360 ms 00:33:22.635 [2024-11-21 03:42:10.052780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.635 [2024-11-21 03:42:10.052868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.635 [2024-11-21 03:42:10.052879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:22.635 [2024-11-21 03:42:10.052889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:33:22.635 [2024-11-21 03:42:10.052921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.635 
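Each FTL management step above is traced as an Action / name / duration / status quadruple, and the startup pipeline closes with a finish_msg just below reporting the overall time (79.608 ms here). One way to audit such a run is to tally the per-step durations from a captured log and compare the sum against that total. The helper below is a hypothetical sketch, not part of the SPDK test suite, and the ftl.log filename is an assumption:

#!/usr/bin/env bash
# Hypothetical log-audit helper: sums the per-step FTL trace durations.
# Assumes the console output above was captured one entry per line to ftl.log.
awk '
  /trace_step.*name:/     { sub(/.*name: /, "");     name = $0 }
  /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                            printf "%-35s %9.3f ms\n", name, $0
                            total += $0 }
  END                     { printf "%-35s %9.3f ms\n", "sum of steps", total }
' ftl.log

The sum of the step durations should land close to, but slightly under, the total the finish_msg reports, since the management process adds its own dispatch overhead between steps (the steps above sum to roughly 75.8 ms against the reported 79.608 ms).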
[2024-11-21 03:42:10.054363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 79.608 ms, result 0 00:33:24.021  [2024-11-21T03:42:12.528Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-21T03:42:13.472Z] Copying: 30/1024 [MB] (15 MBps) [2024-11-21T03:42:14.415Z] Copying: 44/1024 [MB] (14 MBps) [2024-11-21T03:42:15.355Z] Copying: 61/1024 [MB] (16 MBps) [2024-11-21T03:42:16.296Z] Copying: 77/1024 [MB] (15 MBps) [2024-11-21T03:42:17.679Z] Copying: 88/1024 [MB] (11 MBps) [2024-11-21T03:42:18.250Z] Copying: 102/1024 [MB] (13 MBps) [2024-11-21T03:42:19.632Z] Copying: 117/1024 [MB] (15 MBps) [2024-11-21T03:42:20.575Z] Copying: 129/1024 [MB] (11 MBps) [2024-11-21T03:42:21.519Z] Copying: 140/1024 [MB] (10 MBps) [2024-11-21T03:42:22.461Z] Copying: 150/1024 [MB] (10 MBps) [2024-11-21T03:42:23.405Z] Copying: 173/1024 [MB] (22 MBps) [2024-11-21T03:42:24.348Z] Copying: 194/1024 [MB] (21 MBps) [2024-11-21T03:42:25.293Z] Copying: 209/1024 [MB] (15 MBps) [2024-11-21T03:42:26.682Z] Copying: 228/1024 [MB] (18 MBps) [2024-11-21T03:42:27.255Z] Copying: 248/1024 [MB] (20 MBps) [2024-11-21T03:42:28.643Z] Copying: 267/1024 [MB] (18 MBps) [2024-11-21T03:42:29.588Z] Copying: 293/1024 [MB] (25 MBps) [2024-11-21T03:42:30.532Z] Copying: 313/1024 [MB] (20 MBps) [2024-11-21T03:42:31.477Z] Copying: 331/1024 [MB] (17 MBps) [2024-11-21T03:42:32.421Z] Copying: 345/1024 [MB] (13 MBps) [2024-11-21T03:42:33.366Z] Copying: 362/1024 [MB] (17 MBps) [2024-11-21T03:42:34.308Z] Copying: 378/1024 [MB] (15 MBps) [2024-11-21T03:42:35.252Z] Copying: 390/1024 [MB] (12 MBps) [2024-11-21T03:42:36.638Z] Copying: 401/1024 [MB] (10 MBps) [2024-11-21T03:42:37.580Z] Copying: 420/1024 [MB] (19 MBps) [2024-11-21T03:42:38.523Z] Copying: 431/1024 [MB] (11 MBps) [2024-11-21T03:42:39.524Z] Copying: 445/1024 [MB] (13 MBps) [2024-11-21T03:42:40.472Z] Copying: 466/1024 [MB] (21 MBps) [2024-11-21T03:42:41.414Z] Copying: 480/1024 [MB] (14 MBps) [2024-11-21T03:42:42.359Z] Copying: 499/1024 [MB] (18 MBps) [2024-11-21T03:42:43.302Z] Copying: 525/1024 [MB] (25 MBps) [2024-11-21T03:42:44.689Z] Copying: 546/1024 [MB] (21 MBps) [2024-11-21T03:42:45.261Z] Copying: 567/1024 [MB] (20 MBps) [2024-11-21T03:42:46.645Z] Copying: 588/1024 [MB] (20 MBps) [2024-11-21T03:42:47.588Z] Copying: 606/1024 [MB] (18 MBps) [2024-11-21T03:42:48.530Z] Copying: 628/1024 [MB] (21 MBps) [2024-11-21T03:42:49.473Z] Copying: 651/1024 [MB] (23 MBps) [2024-11-21T03:42:50.415Z] Copying: 669/1024 [MB] (17 MBps) [2024-11-21T03:42:51.358Z] Copying: 688/1024 [MB] (18 MBps) [2024-11-21T03:42:52.301Z] Copying: 700/1024 [MB] (12 MBps) [2024-11-21T03:42:53.688Z] Copying: 723/1024 [MB] (22 MBps) [2024-11-21T03:42:54.261Z] Copying: 744/1024 [MB] (21 MBps) [2024-11-21T03:42:55.648Z] Copying: 762/1024 [MB] (18 MBps) [2024-11-21T03:42:56.592Z] Copying: 780/1024 [MB] (17 MBps) [2024-11-21T03:42:57.536Z] Copying: 798/1024 [MB] (17 MBps) [2024-11-21T03:42:58.479Z] Copying: 808/1024 [MB] (10 MBps) [2024-11-21T03:42:59.427Z] Copying: 825/1024 [MB] (16 MBps) [2024-11-21T03:43:00.367Z] Copying: 845/1024 [MB] (19 MBps) [2024-11-21T03:43:01.311Z] Copying: 860/1024 [MB] (15 MBps) [2024-11-21T03:43:02.255Z] Copying: 878/1024 [MB] (17 MBps) [2024-11-21T03:43:03.642Z] Copying: 892/1024 [MB] (14 MBps) [2024-11-21T03:43:04.585Z] Copying: 912/1024 [MB] (19 MBps) [2024-11-21T03:43:05.530Z] Copying: 932/1024 [MB] (20 MBps) [2024-11-21T03:43:06.474Z] Copying: 958/1024 [MB] (26 MBps) [2024-11-21T03:43:07.418Z] Copying: 974/1024 [MB] (16 MBps) 
[2024-11-21T03:43:08.363Z] Copying: 987/1024 [MB] (12 MBps) [2024-11-21T03:43:09.307Z] Copying: 1002/1024 [MB] (15 MBps) [2024-11-21T03:43:09.570Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-21 03:43:09.362709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.005 [2024-11-21 03:43:09.363113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:22.005 [2024-11-21 03:43:09.364091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:22.005 [2024-11-21 03:43:09.364124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.005 [2024-11-21 03:43:09.364178] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:22.005 [2024-11-21 03:43:09.365102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.005 [2024-11-21 03:43:09.365130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:22.005 [2024-11-21 03:43:09.365146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.903 ms 00:34:22.005 [2024-11-21 03:43:09.365169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.005 [2024-11-21 03:43:09.365449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.005 [2024-11-21 03:43:09.365461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:22.005 [2024-11-21 03:43:09.365471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:34:22.005 [2024-11-21 03:43:09.365485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.005 [2024-11-21 03:43:09.365520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.005 [2024-11-21 03:43:09.365530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:22.005 [2024-11-21 03:43:09.365540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:22.005 [2024-11-21 03:43:09.365549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.005 [2024-11-21 03:43:09.365617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.005 [2024-11-21 03:43:09.365631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:22.005 [2024-11-21 03:43:09.365641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:34:22.005 [2024-11-21 03:43:09.365650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.005 [2024-11-21 03:43:09.365666] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:22.005 [2024-11-21 03:43:09.365681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:34:22.005 [2024-11-21 03:43:09.365696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 
state: free 00:34:22.005 [2024-11-21 03:43:09.365741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 
261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.365999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366446] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:22.005 [2024-11-21 03:43:09.366498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:22.006 [2024-11-21 03:43:09.366506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:22.006 [2024-11-21 03:43:09.366515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:22.006 [2024-11-21 03:43:09.366524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:22.006 [2024-11-21 03:43:09.366532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:22.006 [2024-11-21 03:43:09.366541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:22.006 [2024-11-21 03:43:09.366550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:22.006 [2024-11-21 03:43:09.366559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:22.006 [2024-11-21 03:43:09.366567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:22.006 [2024-11-21 03:43:09.366576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:22.006 [2024-11-21 03:43:09.366585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:22.006 [2024-11-21 03:43:09.366594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:22.006 [2024-11-21 03:43:09.366602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:22.006 [2024-11-21 03:43:09.366611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:22.006 [2024-11-21 03:43:09.366629] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:22.006 [2024-11-21 03:43:09.366639] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9bb65078-7682-42ba-84f4-487ad2767c85 00:34:22.006 [2024-11-21 03:43:09.366649] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:34:22.006 [2024-11-21 03:43:09.366658] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2336 00:34:22.006 [2024-11-21 03:43:09.366667] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2304 00:34:22.006 [2024-11-21 03:43:09.366676] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0139 00:34:22.006 
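The WAF figure just reported is simply the ratio of total media writes to user writes: 2336 / 2304 ≈ 1.0139, the 32 extra writes presumably being the FTL's own bookkeeping traffic (superblock and other metadata updates) on top of the user I/O. The one-liner below only reproduces that arithmetic:

# Reproduce the write-amplification factor from the statistics dump above.
awk 'BEGIN { printf "WAF: %.4f\n", 2336 / 2304 }'   # prints WAF: 1.0139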
[2024-11-21 03:43:09.366688] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:22.006 [2024-11-21 03:43:09.366697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:22.006 [2024-11-21 03:43:09.366708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:22.006 [2024-11-21 03:43:09.366716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:22.006 [2024-11-21 03:43:09.366724] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:22.006 [2024-11-21 03:43:09.366732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.006 [2024-11-21 03:43:09.366741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:22.006 [2024-11-21 03:43:09.366750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:34:22.006 [2024-11-21 03:43:09.366758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.006 [2024-11-21 03:43:09.369627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.006 [2024-11-21 03:43:09.369785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:22.006 [2024-11-21 03:43:09.369862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.810 ms 00:34:22.006 [2024-11-21 03:43:09.369892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.006 [2024-11-21 03:43:09.370062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.006 [2024-11-21 03:43:09.370157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:22.006 [2024-11-21 03:43:09.370189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:34:22.006 [2024-11-21 03:43:09.370213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.006 [2024-11-21 03:43:09.378263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.006 [2024-11-21 03:43:09.378430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:22.006 [2024-11-21 03:43:09.378496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.006 [2024-11-21 03:43:09.378524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.006 [2024-11-21 03:43:09.378619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.006 [2024-11-21 03:43:09.378647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:22.006 [2024-11-21 03:43:09.378672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.006 [2024-11-21 03:43:09.378696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.006 [2024-11-21 03:43:09.378819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.006 [2024-11-21 03:43:09.378863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:22.006 [2024-11-21 03:43:09.378890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.006 [2024-11-21 03:43:09.379027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.006 [2024-11-21 03:43:09.379067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.006 [2024-11-21 03:43:09.379095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:22.006 [2024-11-21 03:43:09.379121] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.006 [2024-11-21 03:43:09.379140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.006 [2024-11-21 03:43:09.392951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.006 [2024-11-21 03:43:09.393128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:22.006 [2024-11-21 03:43:09.393183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.006 [2024-11-21 03:43:09.393206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.006 [2024-11-21 03:43:09.404128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.006 [2024-11-21 03:43:09.404301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:22.006 [2024-11-21 03:43:09.404353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.006 [2024-11-21 03:43:09.404385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.006 [2024-11-21 03:43:09.404450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.006 [2024-11-21 03:43:09.404473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:22.006 [2024-11-21 03:43:09.404498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.006 [2024-11-21 03:43:09.404517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.006 [2024-11-21 03:43:09.404565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.006 [2024-11-21 03:43:09.404586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:22.006 [2024-11-21 03:43:09.404607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.006 [2024-11-21 03:43:09.404666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.006 [2024-11-21 03:43:09.404744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.006 [2024-11-21 03:43:09.404806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:22.006 [2024-11-21 03:43:09.404830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.006 [2024-11-21 03:43:09.404877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.006 [2024-11-21 03:43:09.405026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.006 [2024-11-21 03:43:09.405056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:22.006 [2024-11-21 03:43:09.405113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.006 [2024-11-21 03:43:09.405135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.006 [2024-11-21 03:43:09.405189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.006 [2024-11-21 03:43:09.405211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:22.006 [2024-11-21 03:43:09.405239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.006 [2024-11-21 03:43:09.405270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.006 [2024-11-21 03:43:09.405329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.006 [2024-11-21 03:43:09.405477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:22.006 
[2024-11-21 03:43:09.405500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.006 [2024-11-21 03:43:09.405519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.006 [2024-11-21 03:43:09.405666] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 42.932 ms, result 0 00:34:22.268 00:34:22.268 00:34:22.268 03:43:09 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:23.734 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:23.734 03:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:34:23.734 03:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:34:23.734 03:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:23.996 03:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:23.996 03:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:23.996 Process with pid 96156 is not found 00:34:23.996 03:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 96156 00:34:23.996 03:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 96156 ']' 00:34:23.996 03:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 96156 00:34:23.996 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (96156) - No such process 00:34:23.996 03:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 96156 is not found' 00:34:23.996 Remove shared memory files 00:34:23.996 03:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:34:23.996 03:43:11 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:23.996 03:43:11 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:34:23.996 03:43:11 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_9bb65078-7682-42ba-84f4-487ad2767c85_band_md /dev/hugepages/ftl_9bb65078-7682-42ba-84f4-487ad2767c85_l2p_l1 /dev/hugepages/ftl_9bb65078-7682-42ba-84f4-487ad2767c85_l2p_l2 /dev/hugepages/ftl_9bb65078-7682-42ba-84f4-487ad2767c85_l2p_l2_ctx /dev/hugepages/ftl_9bb65078-7682-42ba-84f4-487ad2767c85_nvc_md /dev/hugepages/ftl_9bb65078-7682-42ba-84f4-487ad2767c85_p2l_pool /dev/hugepages/ftl_9bb65078-7682-42ba-84f4-487ad2767c85_sb /dev/hugepages/ftl_9bb65078-7682-42ba-84f4-487ad2767c85_sb_shm /dev/hugepages/ftl_9bb65078-7682-42ba-84f4-487ad2767c85_trim_bitmap /dev/hugepages/ftl_9bb65078-7682-42ba-84f4-487ad2767c85_trim_log /dev/hugepages/ftl_9bb65078-7682-42ba-84f4-487ad2767c85_trim_md /dev/hugepages/ftl_9bb65078-7682-42ba-84f4-487ad2767c85_vmap 00:34:23.996 03:43:11 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:34:23.996 03:43:11 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:23.996 03:43:11 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:34:23.996 ************************************ 00:34:23.996 END TEST ftl_restore_fast 00:34:23.996 00:34:23.996 real 4m14.355s 00:34:23.996 user 4m2.074s 00:34:23.996 sys 0m12.020s 00:34:23.996 03:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:23.996 03:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:34:23.996 
************************************ 00:34:23.996 03:43:11 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:34:23.996 03:43:11 ftl -- ftl/ftl.sh@14 -- # killprocess 87806 00:34:23.996 03:43:11 ftl -- common/autotest_common.sh@954 -- # '[' -z 87806 ']' 00:34:23.996 03:43:11 ftl -- common/autotest_common.sh@958 -- # kill -0 87806 00:34:23.996 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87806) - No such process 00:34:23.996 Process with pid 87806 is not found 00:34:23.996 03:43:11 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 87806 is not found' 00:34:23.996 03:43:11 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:34:23.996 03:43:11 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=98774 00:34:23.996 03:43:11 ftl -- ftl/ftl.sh@20 -- # waitforlisten 98774 00:34:23.996 03:43:11 ftl -- common/autotest_common.sh@835 -- # '[' -z 98774 ']' 00:34:23.996 03:43:11 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:23.996 03:43:11 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:23.996 03:43:11 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:23.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:23.996 03:43:11 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:23.996 03:43:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:23.996 03:43:11 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:23.996 [2024-11-21 03:43:11.474843] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:34:23.996 [2024-11-21 03:43:11.474979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98774 ] 00:34:24.257 [2024-11-21 03:43:11.610452] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
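Both killprocess calls in this teardown (pid 87806 above, pid 98774 below) follow the probe sequence visible in the xtrace: bail out on an empty pid, test for existence with kill -0, inspect the command name with ps, then signal and wait. The function below is a simplified, hedged reconstruction of that pattern, not a verbatim copy of test/common/autotest_common.sh:

# Simplified sketch of the harness's killprocess pattern, as exercised above.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    if ! kill -0 "$pid" 2>/dev/null; then
        # Matches the "(96156) - No such process" branch seen earlier.
        echo "Process with pid $pid is not found"
        return 0
    fi
    # The real helper also inspects the process name (ps --no-headers -o comm=)
    # to special-case targets running under sudo; reactor_0 is the normal case.
    echo "killing process with pid $pid"
    kill "$pid"
    # wait only reaps children of this shell, which spdk_tgt is here.
    wait "$pid" 2>/dev/null
}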
00:34:24.257 [2024-11-21 03:43:11.641243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:24.257 [2024-11-21 03:43:11.669678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.829 03:43:12 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:24.829 03:43:12 ftl -- common/autotest_common.sh@868 -- # return 0 00:34:24.829 03:43:12 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:25.089 nvme0n1 00:34:25.089 03:43:12 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:34:25.089 03:43:12 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:25.089 03:43:12 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:25.351 03:43:12 ftl -- ftl/common.sh@28 -- # stores=77e0680d-904c-4df4-bc95-bce8bcffd296 00:34:25.351 03:43:12 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:34:25.351 03:43:12 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 77e0680d-904c-4df4-bc95-bce8bcffd296 00:34:25.612 03:43:13 ftl -- ftl/ftl.sh@23 -- # killprocess 98774 00:34:25.612 03:43:13 ftl -- common/autotest_common.sh@954 -- # '[' -z 98774 ']' 00:34:25.612 03:43:13 ftl -- common/autotest_common.sh@958 -- # kill -0 98774 00:34:25.612 03:43:13 ftl -- common/autotest_common.sh@959 -- # uname 00:34:25.612 03:43:13 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:25.612 03:43:13 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98774 00:34:25.612 03:43:13 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:25.612 killing process with pid 98774 00:34:25.612 03:43:13 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:25.612 03:43:13 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98774' 00:34:25.612 03:43:13 ftl -- common/autotest_common.sh@973 -- # kill 98774 00:34:25.612 03:43:13 ftl -- common/autotest_common.sh@978 -- # wait 98774 00:34:25.873 03:43:13 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:26.134 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:26.134 Waiting for block devices as requested 00:34:26.134 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:26.395 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:26.395 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:26.395 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:31.693 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:31.694 Remove shared memory files 00:34:31.694 03:43:18 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:31.694 03:43:18 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:31.694 03:43:18 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:31.694 03:43:19 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:31.694 03:43:19 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:31.694 03:43:19 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:31.694 03:43:19 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:31.694 00:34:31.694 real 16m30.917s 00:34:31.694 user 18m18.216s 00:34:31.694 sys 1m25.987s 00:34:31.694 03:43:19 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:31.694 ************************************ 00:34:31.694 END TEST ftl 00:34:31.694 ************************************ 00:34:31.694 03:43:19 ftl -- 
00:34:31.694 03:43:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:34:31.694 03:43:19 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:34:31.694 03:43:19 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:34:31.694 03:43:19 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:34:31.694 03:43:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:34:31.694 03:43:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:34:31.694 03:43:19 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:34:31.694 03:43:19 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:34:31.694 03:43:19 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:34:31.694 03:43:19 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:34:31.694 03:43:19 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:31.694 03:43:19 -- common/autotest_common.sh@10 -- # set +x
00:34:31.694 03:43:19 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:34:31.694 03:43:19 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:34:31.694 03:43:19 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:34:31.694 03:43:19 -- common/autotest_common.sh@10 -- # set +x
00:34:33.083 INFO: APP EXITING
00:34:33.083 INFO: killing all VMs
00:34:33.083 INFO: killing vhost app
00:34:33.083 INFO: EXIT DONE
00:34:33.344 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:33.605 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:34:33.605 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:34:33.605 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:34:33.605 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:34:34.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:34.438 Cleaning
00:34:34.438 Removing: /var/run/dpdk/spdk0/config
00:34:34.438 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:34:34.438 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:34:34.438 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:34:34.438 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:34:34.438 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:34:34.438 Removing: /var/run/dpdk/spdk0/hugepage_info
00:34:34.438 Removing: /var/run/dpdk/spdk0
00:34:34.438 Removing: /var/run/dpdk/spdk_pid70753
00:34:34.438 Removing: /var/run/dpdk/spdk_pid70917
00:34:34.438 Removing: /var/run/dpdk/spdk_pid71118
00:34:34.438 Removing: /var/run/dpdk/spdk_pid71200
00:34:34.438 Removing: /var/run/dpdk/spdk_pid71229
00:34:34.438 Removing: /var/run/dpdk/spdk_pid71335
00:34:34.438 Removing: /var/run/dpdk/spdk_pid71353
00:34:34.438 Removing: /var/run/dpdk/spdk_pid71530
00:34:34.438 Removing: /var/run/dpdk/spdk_pid71603
00:34:34.438 Removing: /var/run/dpdk/spdk_pid71688
00:34:34.438 Removing: /var/run/dpdk/spdk_pid71783
00:34:34.438 Removing: /var/run/dpdk/spdk_pid71863
00:34:34.438 Removing: /var/run/dpdk/spdk_pid71897
00:34:34.438 Removing: /var/run/dpdk/spdk_pid71934
00:34:34.438 Removing: /var/run/dpdk/spdk_pid72004
00:34:34.438 Removing: /var/run/dpdk/spdk_pid72099
00:34:34.438 Removing: /var/run/dpdk/spdk_pid72519
00:34:34.438 Removing: /var/run/dpdk/spdk_pid72566
00:34:34.438 Removing: /var/run/dpdk/spdk_pid72613
00:34:34.438 Removing: /var/run/dpdk/spdk_pid72629
00:34:34.438 Removing: /var/run/dpdk/spdk_pid72687
00:34:34.438 Removing: /var/run/dpdk/spdk_pid72703
00:34:34.438 Removing: /var/run/dpdk/spdk_pid72761
00:34:34.438 Removing: /var/run/dpdk/spdk_pid72777
00:34:34.438 Removing: /var/run/dpdk/spdk_pid72819
00:34:34.438 Removing: /var/run/dpdk/spdk_pid72837
00:34:34.438 Removing: /var/run/dpdk/spdk_pid72879
00:34:34.438 Removing: /var/run/dpdk/spdk_pid72897
00:34:34.438 Removing: /var/run/dpdk/spdk_pid73024
00:34:34.438 Removing: /var/run/dpdk/spdk_pid73055
00:34:34.438 Removing: /var/run/dpdk/spdk_pid73144
00:34:34.438 Removing: /var/run/dpdk/spdk_pid73305
00:34:34.438 Removing: /var/run/dpdk/spdk_pid73367
00:34:34.438 Removing: /var/run/dpdk/spdk_pid73398
00:34:34.438 Removing: /var/run/dpdk/spdk_pid73808
00:34:34.438 Removing: /var/run/dpdk/spdk_pid73901
00:34:34.438 Removing: /var/run/dpdk/spdk_pid74006
00:34:34.438 Removing: /var/run/dpdk/spdk_pid74047
00:34:34.438 Removing: /var/run/dpdk/spdk_pid74068
00:34:34.438 Removing: /var/run/dpdk/spdk_pid74147
00:34:34.438 Removing: /var/run/dpdk/spdk_pid74760
00:34:34.438 Removing: /var/run/dpdk/spdk_pid74787
00:34:34.438 Removing: /var/run/dpdk/spdk_pid75232
00:34:34.438 Removing: /var/run/dpdk/spdk_pid75325
00:34:34.701 Removing: /var/run/dpdk/spdk_pid75423
00:34:34.701 Removing: /var/run/dpdk/spdk_pid75464
00:34:34.701 Removing: /var/run/dpdk/spdk_pid75485
00:34:34.701 Removing: /var/run/dpdk/spdk_pid75505
00:34:34.701 Removing: /var/run/dpdk/spdk_pid77354
00:34:34.701 Removing: /var/run/dpdk/spdk_pid77469
00:34:34.701 Removing: /var/run/dpdk/spdk_pid77473
00:34:34.701 Removing: /var/run/dpdk/spdk_pid77496
00:34:34.701 Removing: /var/run/dpdk/spdk_pid77536
00:34:34.701 Removing: /var/run/dpdk/spdk_pid77540
00:34:34.701 Removing: /var/run/dpdk/spdk_pid77552
00:34:34.701 Removing: /var/run/dpdk/spdk_pid77598
00:34:34.701 Removing: /var/run/dpdk/spdk_pid77602
00:34:34.701 Removing: /var/run/dpdk/spdk_pid77614
00:34:34.701 Removing: /var/run/dpdk/spdk_pid77659
00:34:34.701 Removing: /var/run/dpdk/spdk_pid77663
00:34:34.701 Removing: /var/run/dpdk/spdk_pid77675
00:34:34.701 Removing: /var/run/dpdk/spdk_pid79060
00:34:34.701 Removing: /var/run/dpdk/spdk_pid79146
00:34:34.701 Removing: /var/run/dpdk/spdk_pid80542
00:34:34.701 Removing: /var/run/dpdk/spdk_pid82290
00:34:34.701 Removing: /var/run/dpdk/spdk_pid82342
00:34:34.701 Removing: /var/run/dpdk/spdk_pid82406
00:34:34.701 Removing: /var/run/dpdk/spdk_pid82510
00:34:34.701 Removing: /var/run/dpdk/spdk_pid82591
00:34:34.701 Removing: /var/run/dpdk/spdk_pid82681
00:34:34.701 Removing: /var/run/dpdk/spdk_pid82733
00:34:34.701 Removing: /var/run/dpdk/spdk_pid82803
00:34:34.701 Removing: /var/run/dpdk/spdk_pid82901
00:34:34.701 Removing: /var/run/dpdk/spdk_pid82983
00:34:34.701 Removing: /var/run/dpdk/spdk_pid83073
00:34:34.701 Removing: /var/run/dpdk/spdk_pid83131
00:34:34.701 Removing: /var/run/dpdk/spdk_pid83195
00:34:34.701 Removing: /var/run/dpdk/spdk_pid83288
00:34:34.701 Removing: /var/run/dpdk/spdk_pid83374
00:34:34.701 Removing: /var/run/dpdk/spdk_pid83463
00:34:34.701 Removing: /var/run/dpdk/spdk_pid83522
00:34:34.701 Removing: /var/run/dpdk/spdk_pid83586
00:34:34.701 Removing: /var/run/dpdk/spdk_pid83685
00:34:34.701 Removing: /var/run/dpdk/spdk_pid83765
00:34:34.701 Removing: /var/run/dpdk/spdk_pid83850
00:34:34.701 Removing: /var/run/dpdk/spdk_pid83908
00:34:34.701 Removing: /var/run/dpdk/spdk_pid83972
00:34:34.701 Removing: /var/run/dpdk/spdk_pid84035
00:34:34.701 Removing: /var/run/dpdk/spdk_pid84108
00:34:34.701 Removing: /var/run/dpdk/spdk_pid84208
00:34:34.701 Removing: /var/run/dpdk/spdk_pid84282
00:34:34.701 Removing: /var/run/dpdk/spdk_pid84371
00:34:34.701 Removing: /var/run/dpdk/spdk_pid84429
00:34:34.701 Removing: /var/run/dpdk/spdk_pid84492
00:34:34.701 Removing: /var/run/dpdk/spdk_pid84568
00:34:34.701 Removing: /var/run/dpdk/spdk_pid84633
00:34:34.701 Removing: /var/run/dpdk/spdk_pid84731
00:34:34.701 Removing: /var/run/dpdk/spdk_pid84805
00:34:34.701 Removing: /var/run/dpdk/spdk_pid84949
00:34:34.701 Removing: /var/run/dpdk/spdk_pid85211
00:34:34.701 Removing: /var/run/dpdk/spdk_pid85242
00:34:34.701 Removing: /var/run/dpdk/spdk_pid85687
00:34:34.701 Removing: /var/run/dpdk/spdk_pid85861
00:34:34.701 Removing: /var/run/dpdk/spdk_pid85954
00:34:34.701 Removing: /var/run/dpdk/spdk_pid86054
00:34:34.701 Removing: /var/run/dpdk/spdk_pid86089
00:34:34.701 Removing: /var/run/dpdk/spdk_pid86116
00:34:34.701 Removing: /var/run/dpdk/spdk_pid86410
00:34:34.701 Removing: /var/run/dpdk/spdk_pid86448
00:34:34.701 Removing: /var/run/dpdk/spdk_pid86493
00:34:34.701 Removing: /var/run/dpdk/spdk_pid86857
00:34:34.701 Removing: /var/run/dpdk/spdk_pid87003
00:34:34.701 Removing: /var/run/dpdk/spdk_pid87806
00:34:34.701 Removing: /var/run/dpdk/spdk_pid87916
00:34:34.701 Removing: /var/run/dpdk/spdk_pid88070
00:34:34.701 Removing: /var/run/dpdk/spdk_pid88151
00:34:34.701 Removing: /var/run/dpdk/spdk_pid88431
00:34:34.701 Removing: /var/run/dpdk/spdk_pid88690
00:34:34.701 Removing: /var/run/dpdk/spdk_pid89042
00:34:34.701 Removing: /var/run/dpdk/spdk_pid89207
00:34:34.701 Removing: /var/run/dpdk/spdk_pid89348
00:34:34.701 Removing: /var/run/dpdk/spdk_pid89384
00:34:34.701 Removing: /var/run/dpdk/spdk_pid89572
00:34:34.701 Removing: /var/run/dpdk/spdk_pid89586
00:34:34.701 Removing: /var/run/dpdk/spdk_pid89622
00:34:34.701 Removing: /var/run/dpdk/spdk_pid89858
00:34:34.701 Removing: /var/run/dpdk/spdk_pid90083
00:34:34.701 Removing: /var/run/dpdk/spdk_pid90699
00:34:34.701 Removing: /var/run/dpdk/spdk_pid91444
00:34:34.701 Removing: /var/run/dpdk/spdk_pid91982
00:34:34.701 Removing: /var/run/dpdk/spdk_pid92554
00:34:34.701 Removing: /var/run/dpdk/spdk_pid92697
00:34:34.701 Removing: /var/run/dpdk/spdk_pid92783
00:34:34.701 Removing: /var/run/dpdk/spdk_pid93340
00:34:34.701 Removing: /var/run/dpdk/spdk_pid93393
00:34:34.701 Removing: /var/run/dpdk/spdk_pid94032
00:34:34.701 Removing: /var/run/dpdk/spdk_pid94483
00:34:34.701 Removing: /var/run/dpdk/spdk_pid95236
00:34:34.701 Removing: /var/run/dpdk/spdk_pid95354
00:34:34.701 Removing: /var/run/dpdk/spdk_pid95384
00:34:34.701 Removing: /var/run/dpdk/spdk_pid95437
00:34:34.701 Removing: /var/run/dpdk/spdk_pid95493
00:34:34.701 Removing: /var/run/dpdk/spdk_pid95540
00:34:34.701 Removing: /var/run/dpdk/spdk_pid95744
00:34:34.701 Removing: /var/run/dpdk/spdk_pid95816
00:34:34.701 Removing: /var/run/dpdk/spdk_pid95876
00:34:34.963 Removing: /var/run/dpdk/spdk_pid95932
00:34:34.963 Removing: /var/run/dpdk/spdk_pid95967
00:34:34.963 Removing: /var/run/dpdk/spdk_pid96023
00:34:34.963 Removing: /var/run/dpdk/spdk_pid96156
00:34:34.963 Removing: /var/run/dpdk/spdk_pid96367
00:34:34.963 Removing: /var/run/dpdk/spdk_pid96870
00:34:34.963 Removing: /var/run/dpdk/spdk_pid97528
00:34:34.963 Removing: /var/run/dpdk/spdk_pid98133
00:34:34.963 Removing: /var/run/dpdk/spdk_pid98774
00:34:34.963 Clean
00:34:34.963 03:43:22 -- common/autotest_common.sh@1453 -- # return 0
00:34:34.963 03:43:22 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:34:34.963 03:43:22 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:34.963 03:43:22 -- common/autotest_common.sh@10 -- # set +x
00:34:34.963 03:43:22 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:34:34.963 03:43:22 -- common/autotest_common.sh@732 -- # xtrace_disable
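The Cleaning/Removing block above is autotest_cleanup sweeping DPDK runtime state out of /var/run/dpdk: the spdk0 run directory (EAL config, fbarray memseg and memzone files, hugepage_info) plus one spdk_pid* lock file for every SPDK process this job ever started. Roughly equivalent shell, offered only as a sketch; how the real helper enumerates its candidates is not visible in the log:

    # Sketch only: sweep DPDK runtime state as the Cleaning step above does.
    # Paths match the log; the real autotest_cleanup may select files differently.
    for f in /var/run/dpdk/spdk0 /var/run/dpdk/spdk_pid*; do
        [ -e "$f" ] || continue
        echo "Removing: $f"
        rm -rf "$f"
    done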
00:34:34.963 03:43:22 -- common/autotest_common.sh@10 -- # set +x
00:34:34.963 03:43:22 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:34.963 03:43:22 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:34:34.963 03:43:22 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:34:34.963 03:43:22 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:34:34.963 03:43:22 -- spdk/autotest.sh@398 -- # hostname
00:34:34.963 03:43:22 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:35:01.808 geninfo: WARNING: invalid characters removed from testname!
00:35:03.724 03:43:50 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:06.271 03:43:53 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:08.816 03:43:56 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:10.731 03:43:58 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:12.646 03:43:59 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:15.193 03:44:02 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:15.193 03:44:02 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
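Condensed, the coverage steps traced above (autotest.sh@398-@407) are: capture test-time counters into cov_test.info tagged with the hostname, merge them with the pre-test baseline cov_base.info, then strip bundled DPDK, system headers, and example/tool sources from the merged report. The same flow with the repeated --rc flags elided for readability; $OUT is shorthand for the ../output directory spelled out in the log's paths:

    # Trimmed retelling of the lcov invocations above, not a verbatim copy.
    OUT=/home/vagrant/spdk_repo/output
    cd /home/vagrant/spdk_repo/spdk
    # Capture counters gathered while the tests ran, tagged with the hostname.
    lcov -q -c --no-external -d . -t "$(hostname)" -o "$OUT/cov_test.info"
    # Merge the pre-test baseline with the test capture.
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # Filter paths that should not count against SPDK's own coverage.
    lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"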
00:35:15.193 03:44:02 -- spdk/autorun.sh@1 -- $ timing_finish
00:35:15.193 03:44:02 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:35:15.193 03:44:02 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:15.193 03:44:02 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:35:15.193 03:44:02 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:35:15.193 + [[ -n 5754 ]]
00:35:15.204 + sudo kill 5754
00:35:15.204 [Pipeline] }
00:35:15.219 [Pipeline] // timeout
00:35:15.224 [Pipeline] }
00:35:15.238 [Pipeline] // stage
00:35:15.243 [Pipeline] }
00:35:15.256 [Pipeline] // catchError
00:35:15.264 [Pipeline] stage
00:35:15.266 [Pipeline] { (Stop VM)
00:35:15.278 [Pipeline] sh
00:35:15.561 + vagrant halt
00:35:18.102 ==> default: Halting domain...
00:35:23.508 [Pipeline] sh
00:35:23.796 + vagrant destroy -f
00:35:26.341 ==> default: Removing domain...
00:35:26.929 [Pipeline] sh
00:35:27.213 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:35:27.223 [Pipeline] }
00:35:27.242 [Pipeline] // stage
00:35:27.247 [Pipeline] }
00:35:27.264 [Pipeline] // dir
00:35:27.269 [Pipeline] }
00:35:27.286 [Pipeline] // wrap
00:35:27.293 [Pipeline] }
00:35:27.308 [Pipeline] // catchError
00:35:27.318 [Pipeline] stage
00:35:27.320 [Pipeline] { (Epilogue)
00:35:27.335 [Pipeline] sh
00:35:27.624 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:32.927 [Pipeline] catchError
00:35:32.930 [Pipeline] {
00:35:32.943 [Pipeline] sh
00:35:33.229 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:33.229 Artifacts sizes are good
00:35:33.240 [Pipeline] }
00:35:33.254 [Pipeline] // catchError
00:35:33.266 [Pipeline] archiveArtifacts
00:35:33.273 Archiving artifacts
00:35:33.385 [Pipeline] cleanWs
00:35:33.398 [WS-CLEANUP] Deleting project workspace...
00:35:33.398 [WS-CLEANUP] Deferred wipeout is used...
00:35:33.406 [WS-CLEANUP] done
00:35:33.408 [Pipeline] }
00:35:33.423 [Pipeline] // stage
00:35:33.429 [Pipeline] }
00:35:33.441 [Pipeline] // node
00:35:33.447 [Pipeline] End of Pipeline
00:35:33.478 Finished: SUCCESS
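One closing annotation on the timing_finish step at the top of this epilogue: it feeds the per-step timing log into FlameGraph's flamegraph.pl so the 16m30.917s of wall time can be inspected visually, one frame per test step. A usage sketch; the SVG redirect target is an assumption, since the trace shows only the command line:

    # Sketch: render the build-timing flame graph, as traced above.
    # Writing to timing.svg is an assumption; the log does not show the redirect.
    /usr/local/FlameGraph/flamegraph.pl \
        --title 'Build Timing' \
        --nametype Step: \
        --countname seconds \
        /home/vagrant/spdk_repo/output/timing.txt > timing.svg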