00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1017
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3679
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.197 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.197 The recommended git tool is: git
00:00:00.198 using credential 00000000-0000-0000-0000-000000000002
00:00:00.199 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.217 Fetching changes from the remote Git repository
00:00:00.220 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.240 Using shallow fetch with depth 1
00:00:00.240 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.240 > git --version # timeout=10
00:00:00.264 > git --version # 'git version 2.39.2'
00:00:00.264 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.284 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.284 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.099 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.108 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.119 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.119 > git config core.sparsecheckout # timeout=10
00:00:08.129 > git read-tree -mu HEAD # timeout=10
00:00:08.143 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.169 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.169 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.255 [Pipeline] Start of Pipeline
00:00:08.268 [Pipeline] library
00:00:08.270 Loading library shm_lib@master
00:00:08.271 Library shm_lib@master is cached. Copying from home.
00:00:08.288 [Pipeline] node
00:00:08.299 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:08.301 [Pipeline] {
00:00:08.313 [Pipeline] catchError
00:00:08.315 [Pipeline] {
00:00:08.329 [Pipeline] wrap
00:00:08.337 [Pipeline] {
00:00:08.343 [Pipeline] stage
00:00:08.344 [Pipeline] { (Prologue)
00:00:08.361 [Pipeline] echo
00:00:08.363 Node: VM-host-SM38
00:00:08.368 [Pipeline] cleanWs
00:00:08.376 [WS-CLEANUP] Deleting project workspace...
00:00:08.376 [WS-CLEANUP] Deferred wipeout is used...
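
For anyone replaying the checkout above by hand, it reduces to a pinned shallow fetch. A minimal sketch, assuming a scratch clone directory (the URL, proxy setting, and revision are copied from the log lines above; everything else is illustrative):

    # Hypothetical scratch clone; URL, proxy, and SHA are taken from the log above.
    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git -c http.proxy=proxy-dmz.intel.com:911 fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507

The --depth=1 fetch keeps the build-pool checkout fast; the extra rev-parse and rev-list calls in the log are Jenkins bookkeeping rather than part of the minimal flow.
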
00:00:08.383 [WS-CLEANUP] done
00:00:08.552 [Pipeline] setCustomBuildProperty
00:00:08.616 [Pipeline] httpRequest
00:00:08.932 [Pipeline] echo
00:00:08.933 Sorcerer 10.211.164.20 is alive
00:00:08.941 [Pipeline] retry
00:00:08.943 [Pipeline] {
00:00:08.955 [Pipeline] httpRequest
00:00:08.960 HttpMethod: GET
00:00:08.960 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.960 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.971 Response Code: HTTP/1.1 200 OK
00:00:08.971 Success: Status code 200 is in the accepted range: 200,404
00:00:08.972 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.462 [Pipeline] }
00:00:12.478 [Pipeline] // retry
00:00:12.484 [Pipeline] sh
00:00:12.765 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.781 [Pipeline] httpRequest
00:00:13.360 [Pipeline] echo
00:00:13.363 Sorcerer 10.211.164.20 is alive
00:00:13.371 [Pipeline] retry
00:00:13.373 [Pipeline] {
00:00:13.384 [Pipeline] httpRequest
00:00:13.388 HttpMethod: GET
00:00:13.388 URL: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:13.389 Sending request to url: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:13.396 Response Code: HTTP/1.1 200 OK
00:00:13.396 Success: Status code 200 is in the accepted range: 200,404
00:00:13.396 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:01:09.384 [Pipeline] }
00:01:09.402 [Pipeline] // retry
00:01:09.410 [Pipeline] sh
00:01:09.700 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:01:12.257 [Pipeline] sh
00:01:12.542 + git -C spdk log --oneline -n5
00:01:12.542 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:12.542 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:12.542 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:01:12.542 2e10c84c8 nvmf: Expose DIF type of namespace to host again
00:01:12.542 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:01:12.565 [Pipeline] withCredentials
00:01:12.577 > git --version # timeout=10
00:01:12.590 > git --version # 'git version 2.39.2'
00:01:12.610 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:12.612 [Pipeline] {
00:01:12.621 [Pipeline] retry
00:01:12.623 [Pipeline] {
00:01:12.638 [Pipeline] sh
00:01:12.924 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:12.938 [Pipeline] }
00:01:12.956 [Pipeline] // retry
00:01:12.961 [Pipeline] }
00:01:12.977 [Pipeline] // withCredentials
00:01:12.987 [Pipeline] httpRequest
00:01:13.457 [Pipeline] echo
00:01:13.459 Sorcerer 10.211.164.20 is alive
00:01:13.470 [Pipeline] retry
00:01:13.473 [Pipeline] {
00:01:13.490 [Pipeline] httpRequest
00:01:13.496 HttpMethod: GET
00:01:13.497 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:13.497 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:13.499 Response Code: HTTP/1.1 200 OK
00:01:13.500 Success: Status code 200 is in the accepted range: 200,404
00:01:13.500 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:22.157 [Pipeline] }
00:01:22.174 [Pipeline] // retry
00:01:22.182 [Pipeline] sh
00:01:22.464 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:24.388 [Pipeline] sh
00:01:24.675 + git -C dpdk log --oneline -n5
00:01:24.675 eeb0605f11 version: 23.11.0
00:01:24.675 238778122a doc: update release notes for 23.11
00:01:24.675 46aa6b3cfc doc: fix description of RSS features
00:01:24.675 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:24.675 7e421ae345 devtools: support skipping forbid rule check
00:01:24.695 [Pipeline] writeFile
00:01:24.710 [Pipeline] sh
00:01:25.008 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:25.023 [Pipeline] sh
00:01:25.310 + cat autorun-spdk.conf
00:01:25.310 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.310 SPDK_TEST_NVME=1
00:01:25.310 SPDK_TEST_FTL=1
00:01:25.310 SPDK_TEST_ISAL=1
00:01:25.310 SPDK_RUN_ASAN=1
00:01:25.310 SPDK_RUN_UBSAN=1
00:01:25.310 SPDK_TEST_XNVME=1
00:01:25.310 SPDK_TEST_NVME_FDP=1
00:01:25.310 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:25.310 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:25.310 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:25.320 RUN_NIGHTLY=1
00:01:25.322 [Pipeline] }
00:01:25.336 [Pipeline] // stage
00:01:25.351 [Pipeline] stage
00:01:25.355 [Pipeline] { (Run VM)
00:01:25.386 [Pipeline] sh
00:01:25.684 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:25.685 + echo 'Start stage prepare_nvme.sh'
00:01:25.685 Start stage prepare_nvme.sh
00:01:25.685 + [[ -n 9 ]]
00:01:25.685 + disk_prefix=ex9
00:01:25.685 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:25.685 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:25.685 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:25.685 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.685 ++ SPDK_TEST_NVME=1
00:01:25.685 ++ SPDK_TEST_FTL=1
00:01:25.685 ++ SPDK_TEST_ISAL=1
00:01:25.685 ++ SPDK_RUN_ASAN=1
00:01:25.685 ++ SPDK_RUN_UBSAN=1
00:01:25.685 ++ SPDK_TEST_XNVME=1
00:01:25.685 ++ SPDK_TEST_NVME_FDP=1
00:01:25.685 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:25.685 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:25.685 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:25.685 ++ RUN_NIGHTLY=1
00:01:25.685 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:25.685 + nvme_files=()
00:01:25.685 + declare -A nvme_files
00:01:25.685 + backend_dir=/var/lib/libvirt/images/backends
00:01:25.685 + nvme_files['nvme.img']=5G
00:01:25.685 + nvme_files['nvme-cmb.img']=5G
00:01:25.685 + nvme_files['nvme-multi0.img']=4G
00:01:25.685 + nvme_files['nvme-multi1.img']=4G
00:01:25.685 + nvme_files['nvme-multi2.img']=4G
00:01:25.685 + nvme_files['nvme-openstack.img']=8G
00:01:25.685 + nvme_files['nvme-zns.img']=5G
00:01:25.685 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:25.685 + (( SPDK_TEST_FTL == 1 ))
00:01:25.685 + nvme_files["nvme-ftl.img"]=6G
00:01:25.685 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:25.685 + nvme_files["nvme-fdp.img"]=1G
00:01:25.685 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:25.685 + for nvme in "${!nvme_files[@]}"
00:01:25.685 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G
00:01:25.685 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:25.685 + for nvme in "${!nvme_files[@]}"
00:01:25.685 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-ftl.img -s 6G
00:01:25.946 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:25.946 + for nvme in "${!nvme_files[@]}"
00:01:25.946 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G
00:01:25.946 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:25.946 + for nvme in "${!nvme_files[@]}"
00:01:25.946 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G
00:01:25.946 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:25.946 + for nvme in "${!nvme_files[@]}"
00:01:25.946 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G
00:01:25.946 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:26.206 + for nvme in "${!nvme_files[@]}"
00:01:26.206 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G
00:01:26.206 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:26.206 + for nvme in "${!nvme_files[@]}"
00:01:26.206 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G
00:01:26.467 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:26.467 + for nvme in "${!nvme_files[@]}"
00:01:26.467 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-fdp.img -s 1G
00:01:26.467 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:26.467 + for nvme in "${!nvme_files[@]}"
00:01:26.467 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G
00:01:26.729 Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:26.729 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu
00:01:26.729 + echo 'End stage prepare_nvme.sh'
00:01:26.729 End stage prepare_nvme.sh
00:01:26.743 [Pipeline] sh
00:01:27.029 + DISTRO=fedora39
00:01:27.029 + CPUS=10
00:01:27.029 + RAM=12288
00:01:27.029 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:27.029 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex9-nvme.img -b /var/lib/libvirt/images/backends/ex9-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex9-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:27.029
00:01:27.029 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:27.029 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:27.029 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:27.029 HELP=0
00:01:27.029 DRY_RUN=0
00:01:27.029 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,/var/lib/libvirt/images/backends/ex9-nvme.img,/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,
00:01:27.029 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:27.029 NVME_AUTO_CREATE=0
00:01:27.029 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,,
00:01:27.029 NVME_CMB=,,,,
00:01:27.029 NVME_PMR=,,,,
00:01:27.029 NVME_ZNS=,,,,
00:01:27.029 NVME_MS=true,,,,
00:01:27.029 NVME_FDP=,,,on,
00:01:27.029 SPDK_VAGRANT_DISTRO=fedora39
00:01:27.029 SPDK_VAGRANT_VMCPU=10
00:01:27.029 SPDK_VAGRANT_VMRAM=12288
00:01:27.029 SPDK_VAGRANT_PROVIDER=libvirt
00:01:27.029 SPDK_VAGRANT_HTTP_PROXY=
00:01:27.029 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:27.029 SPDK_OPENSTACK_NETWORK=0
00:01:27.029 VAGRANT_PACKAGE_BOX=0
00:01:27.029 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:27.029 FORCE_DISTRO=true
00:01:27.029 VAGRANT_BOX_VERSION=
00:01:27.029 EXTRA_VAGRANTFILES=
00:01:27.029 NIC_MODEL=e1000
00:01:27.029
00:01:27.029 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:27.029 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:29.577 Bringing machine 'default' up with 'libvirt' provider...
00:01:29.839 ==> default: Creating image (snapshot of base box volume).
00:01:30.101 ==> default: Creating domain with the following settings...
00:01:30.101 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732904089_2c8a2b1ec1c2af8e16b2
00:01:30.101 ==> default: -- Domain type: kvm
00:01:30.101 ==> default: -- Cpus: 10
00:01:30.101 ==> default: -- Feature: acpi
00:01:30.101 ==> default: -- Feature: apic
00:01:30.101 ==> default: -- Feature: pae
00:01:30.101 ==> default: -- Memory: 12288M
00:01:30.102 ==> default: -- Memory Backing: hugepages:
00:01:30.102 ==> default: -- Management MAC:
00:01:30.102 ==> default: -- Loader:
00:01:30.102 ==> default: -- Nvram:
00:01:30.102 ==> default: -- Base box: spdk/fedora39
00:01:30.102 ==> default: -- Storage pool: default
00:01:30.102 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732904089_2c8a2b1ec1c2af8e16b2.img (20G)
00:01:30.102 ==> default: -- Volume Cache: default
00:01:30.102 ==> default: -- Kernel:
00:01:30.102 ==> default: -- Initrd:
00:01:30.102 ==> default: -- Graphics Type: vnc
00:01:30.102 ==> default: -- Graphics Port: -1
00:01:30.102 ==> default: -- Graphics IP: 127.0.0.1
00:01:30.102 ==> default: -- Graphics Password: Not defined
00:01:30.102 ==> default: -- Video Type: cirrus
00:01:30.102 ==> default: -- Video VRAM: 9216
00:01:30.102 ==> default: -- Sound Type:
00:01:30.102 ==> default: -- Keymap: en-us
00:01:30.102 ==> default: -- TPM Path:
00:01:30.102 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:30.102 ==> default: -- Command line args:
00:01:30.102 ==> default: -> value=-device,
00:01:30.102 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:30.102 ==> default: -> value=-drive,
00:01:30.102 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:30.102 ==> default: -> value=-device,
00:01:30.102 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:30.102 ==> default: -> value=-device,
00:01:30.102 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:30.102 ==> default: -> value=-drive,
00:01:30.102 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-1-drive0,
00:01:30.102 ==> default: -> value=-device,
00:01:30.102 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:30.102 ==> default: -> value=-device,
00:01:30.102 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:30.102 ==> default: -> value=-drive,
00:01:30.102 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:30.102 ==> default: -> value=-device,
00:01:30.102 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:30.102 ==> default: -> value=-drive,
00:01:30.102 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:30.102 ==> default: -> value=-device,
00:01:30.102 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:30.102 ==> default: -> value=-drive,
00:01:30.102 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:30.102 ==> default: -> value=-device,
00:01:30.102 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:30.102 ==> default: -> value=-device,
00:01:30.102 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:30.102 ==> default: -> value=-device,
00:01:30.102 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:30.102 ==> default: -> value=-drive,
00:01:30.102 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:30.102 ==> default: -> value=-device,
00:01:30.102 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:30.102 ==> default: Creating shared folders metadata...
00:01:30.102 ==> default: Starting domain.
00:01:32.021 ==> default: Waiting for domain to get an IP address...
00:01:50.143 ==> default: Waiting for SSH to become available...
00:01:50.143 ==> default: Configuring and enabling network interfaces...
00:01:52.690 default: SSH address: 192.168.121.198:22
00:01:52.690 default: SSH username: vagrant
00:01:52.690 default: SSH auth method: private key
00:01:54.607 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:02.788 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:08.095 ==> default: Mounting SSHFS shared folder...
00:02:10.637 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:10.637 ==> default: Checking Mount..
00:02:11.208 ==> default: Folder Successfully Mounted!
00:02:11.468
00:02:11.468 SUCCESS!
00:02:11.468
00:02:11.468 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:11.468 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:11.468 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:11.468
00:02:11.475 [Pipeline] }
00:02:11.489 [Pipeline] // stage
00:02:11.497 [Pipeline] dir
00:02:11.498 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:11.499 [Pipeline] {
00:02:11.510 [Pipeline] catchError
00:02:11.512 [Pipeline] {
00:02:11.522 [Pipeline] sh
00:02:11.802 + vagrant ssh-config --host vagrant
00:02:11.802 + sed -ne '/^Host/,$p'
00:02:11.802 + tee ssh_conf
00:02:14.395 Host vagrant
00:02:14.395 HostName 192.168.121.198
00:02:14.395 User vagrant
00:02:14.395 Port 22
00:02:14.395 UserKnownHostsFile /dev/null
00:02:14.395 StrictHostKeyChecking no
00:02:14.395 PasswordAuthentication no
00:02:14.395 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:14.395 IdentitiesOnly yes
00:02:14.395 LogLevel FATAL
00:02:14.395 ForwardAgent yes
00:02:14.395 ForwardX11 yes
00:02:14.395
00:02:14.409 [Pipeline] withEnv
00:02:14.411 [Pipeline] {
00:02:14.423 [Pipeline] sh
00:02:14.705 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:14.705 source /etc/os-release
00:02:14.705 [[ -e /image.version ]] && img=$(< /image.version)
00:02:14.705 # Minimal, systemd-like check.
00:02:14.705 if [[ -e /.dockerenv ]]; then
00:02:14.705 # Clear garbage from the node'\''s name:
00:02:14.705 # agt-er_autotest_547-896 -> autotest_547-896
00:02:14.705 # $HOSTNAME is the actual container id
00:02:14.705 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:14.705 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:14.705 # We can assume this is a mount from a host where container is running,
00:02:14.705 # so fetch its hostname to easily identify the target swarm worker.
00:02:14.705 container="$(< /etc/hostname) ($agent)"
00:02:14.705 else
00:02:14.705 # Fallback
00:02:14.705 container=$agent
00:02:14.705 fi
00:02:14.705 fi
00:02:14.705 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:14.705 '
00:02:14.979 [Pipeline] }
00:02:14.991 [Pipeline] // withEnv
00:02:15.000 [Pipeline] setCustomBuildProperty
00:02:15.015 [Pipeline] stage
00:02:15.018 [Pipeline] { (Tests)
00:02:15.035 [Pipeline] sh
00:02:15.317 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:15.592 [Pipeline] sh
00:02:15.875 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:16.152 [Pipeline] timeout
00:02:16.152 Timeout set to expire in 50 min
00:02:16.154 [Pipeline] {
00:02:16.168 [Pipeline] sh
00:02:16.452 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:17.025 HEAD is now at 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:02:17.039 [Pipeline] sh
00:02:17.325 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:17.602 [Pipeline] sh
00:02:17.887 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:18.166 [Pipeline] sh
00:02:18.452 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:18.714 ++ readlink -f spdk_repo
00:02:18.714 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:18.714 + [[ -n /home/vagrant/spdk_repo ]]
00:02:18.714 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:18.714 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:18.714 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:18.714 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:18.714 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:18.714 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:18.714 + cd /home/vagrant/spdk_repo
00:02:18.714 + source /etc/os-release
00:02:18.714 ++ NAME='Fedora Linux'
00:02:18.714 ++ VERSION='39 (Cloud Edition)'
00:02:18.714 ++ ID=fedora
00:02:18.714 ++ VERSION_ID=39
00:02:18.714 ++ VERSION_CODENAME=
00:02:18.714 ++ PLATFORM_ID=platform:f39
00:02:18.714 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:18.714 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:18.714 ++ LOGO=fedora-logo-icon
00:02:18.714 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:18.714 ++ HOME_URL=https://fedoraproject.org/
00:02:18.714 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:18.714 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:18.714 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:18.714 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:18.714 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:18.714 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:18.714 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:18.714 ++ SUPPORT_END=2024-11-12
00:02:18.714 ++ VARIANT='Cloud Edition'
00:02:18.714 ++ VARIANT_ID=cloud
00:02:18.714 + uname -a
00:02:18.714 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:18.714 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:18.975 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:19.237 Hugepages
00:02:19.237 node hugesize free / total
00:02:19.237 node0 1048576kB 0 / 0
00:02:19.237 node0 2048kB 0 / 0
00:02:19.237
00:02:19.237 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:19.237 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:19.498 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:19.498 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:19.498 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:19.498 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:19.498 + rm -f /tmp/spdk-ld-path
00:02:19.498 + source autorun-spdk.conf
00:02:19.498 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:19.498 ++ SPDK_TEST_NVME=1
00:02:19.498 ++ SPDK_TEST_FTL=1
00:02:19.498 ++ SPDK_TEST_ISAL=1
00:02:19.498 ++ SPDK_RUN_ASAN=1
00:02:19.498 ++ SPDK_RUN_UBSAN=1
00:02:19.498 ++ SPDK_TEST_XNVME=1
00:02:19.498 ++ SPDK_TEST_NVME_FDP=1
00:02:19.498 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:19.498 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:19.498 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:19.498 ++ RUN_NIGHTLY=1
00:02:19.498 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:19.498 + [[ -n '' ]]
00:02:19.498 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:19.498 + for M in /var/spdk/build-*-manifest.txt
00:02:19.498 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:19.498 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:19.498 + for M in /var/spdk/build-*-manifest.txt
00:02:19.498 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:19.498 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:19.498 + for M in /var/spdk/build-*-manifest.txt
00:02:19.498 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:19.498 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:19.498 ++ uname
00:02:19.498 + [[ Linux == \L\i\n\u\x ]]
00:02:19.498 + sudo dmesg -T
00:02:19.498 + sudo dmesg --clear
00:02:19.498 + dmesg_pid=5770
00:02:19.498 + [[ Fedora Linux == FreeBSD ]]
00:02:19.498 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:19.498 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:19.498 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:19.498 + [[ -x /usr/src/fio-static/fio ]]
00:02:19.498 + sudo dmesg -Tw
00:02:19.498 + export FIO_BIN=/usr/src/fio-static/fio
00:02:19.498 + FIO_BIN=/usr/src/fio-static/fio
00:02:19.498 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:19.498 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:19.498 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:19.498 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:19.498 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:19.498 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:19.498 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:19.498 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:19.498 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:19.760 18:15:39 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
18:15:39 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
18:15:39 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
18:15:39 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
18:15:39 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
18:15:39 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
18:15:39 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
18:15:39 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
18:15:39 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
18:15:39 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
18:15:39 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11
18:15:39 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
18:15:39 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
18:15:39 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1
18:15:39 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
18:15:39 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
18:15:39 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
18:15:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
18:15:39 -- scripts/common.sh@15 -- $ shopt -s extglob
18:15:39 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
18:15:39 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
18:15:39 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
18:15:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:19.760 18:15:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:15:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:15:39 -- paths/export.sh@5 -- $ export PATH
18:15:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:15:39 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
18:15:39 -- common/autobuild_common.sh@493 -- $ date +%s
18:15:39 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732904139.XXXXXX
18:15:39 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732904139.PYrsbA
18:15:39 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
18:15:39 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']'
18:15:39 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
18:15:39 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
18:15:39 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
18:15:39 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
18:15:39 -- common/autobuild_common.sh@509 -- $ get_config_params
18:15:39 -- common/autotest_common.sh@409 -- $ xtrace_disable
18:15:39 -- common/autotest_common.sh@10 -- $ set +x
00:02:19.760 18:15:39 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
18:15:39 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
18:15:39 -- pm/common@17 -- $ local monitor
18:15:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:15:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:15:39 -- pm/common@25 -- $ sleep 1
00:02:19.760 18:15:39 -- pm/common@21 -- $ date +%s
00:02:19.760 18:15:39 -- pm/common@21 -- $ date +%s
00:02:19.760 18:15:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732904139
00:02:19.760 18:15:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732904139
00:02:19.760 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732904139_collect-cpu-load.pm.log
00:02:19.760 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732904139_collect-vmstat.pm.log
00:02:20.703 18:15:40 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:20.703 18:15:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:20.703 18:15:40 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:20.703 18:15:40 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:20.703 18:15:40 -- spdk/autobuild.sh@16 -- $ date -u
00:02:20.704 Fri Nov 29 06:15:40 PM UTC 2024
00:02:20.704 18:15:40 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:20.704 v25.01-pre-276-g35cd3e84d
00:02:20.704 18:15:40 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:20.704 18:15:40 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:20.704 18:15:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:20.704 18:15:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:20.704 18:15:40 -- common/autotest_common.sh@10 -- $ set +x
00:02:20.704 ************************************
00:02:20.704 START TEST asan
00:02:20.704 ************************************
00:02:20.704 using asan
00:02:20.704 18:15:40 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:20.704 ************************************
00:02:20.704 END TEST asan
00:02:20.704 ************************************
00:02:20.704
00:02:20.704 real 0m0.000s
00:02:20.704 user 0m0.000s
00:02:20.704 sys 0m0.000s
00:02:20.704 18:15:40 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:20.704 18:15:40 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:20.965 18:15:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:20.965 18:15:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:20.965 18:15:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:20.965 18:15:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:20.965 18:15:40 -- common/autotest_common.sh@10 -- $ set +x
00:02:20.965 ************************************
00:02:20.965 START TEST ubsan
00:02:20.965 ************************************
00:02:20.965 using ubsan
00:02:20.965 18:15:40 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:20.965
00:02:20.965 real 0m0.000s
00:02:20.965 user 0m0.000s
00:02:20.965 sys 0m0.000s
00:02:20.965 18:15:40 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:20.965 18:15:40 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:20.965 ************************************
00:02:20.965 END TEST ubsan
00:02:20.965 ************************************
00:02:20.965 18:15:40 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:02:20.965 18:15:40 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:20.965 18:15:40 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:20.965 18:15:40 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:02:20.965 18:15:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:20.965 18:15:40 -- common/autotest_common.sh@10 -- $ set +x
00:02:20.965 ************************************
00:02:20.965 START TEST build_native_dpdk
00:02:20.965 ************************************
00:02:20.965 18:15:40 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:02:20.965 eeb0605f11 version: 23.11.0
00:02:20.965 238778122a doc: update release notes for 23.11
00:02:20.965 46aa6b3cfc doc: fix description of RSS features
00:02:20.965 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:02:20.965 7e421ae345 devtools: support skipping forbid rule check
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:02:20.965 18:15:40 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0
00:02:20.965 18:15:40 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:02:20.965 18:15:40 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:20.965 18:15:40 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:20.965 18:15:40 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:20.965 18:15:40 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:20.965 18:15:40 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:20.965 18:15:40 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:20.965 18:15:40 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:20.965 18:15:40 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:20.965 18:15:40 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:20.965 18:15:40 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:20.965 18:15:40 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:20.966 18:15:40 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:02:20.966 patching file config/rte_config.h
00:02:20.966 Hunk #1 succeeded at 60 (offset 1 line).
00:02:20.966 18:15:40 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:02:20.966 18:15:40 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1
00:02:20.966 patching file lib/pcapng/rte_pcapng.c
00:02:20.966 18:15:40 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:20.966 18:15:40 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
00:02:20.966 18:15:40 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:02:20.966 18:15:40 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:02:20.966 18:15:40 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:02:20.966 18:15:40 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:02:20.966 18:15:40 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:26.260 The Meson build system
00:02:26.260 Version: 1.5.0
00:02:26.260 Source dir: /home/vagrant/spdk_repo/dpdk
00:02:26.260 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:02:26.260 Build type: native build
00:02:26.260 Program cat found: YES (/usr/bin/cat)
00:02:26.260 Project name: DPDK
00:02:26.260 Project version: 23.11.0
00:02:26.260 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:26.260 C linker for the host machine: gcc ld.bfd 2.40-14
00:02:26.260 Host machine cpu family: x86_64
00:02:26.260 Host machine cpu: x86_64
00:02:26.260 Message: ## Building in Developer Mode ##
00:02:26.260 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:26.260 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:02:26.260 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:02:26.260 Program python3 found: YES (/usr/bin/python3)
00:02:26.260 Program cat found: YES (/usr/bin/cat)
00:02:26.260 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
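
The three version gates traced above (lt 23.11.0 21.11.0, lt 23.11.0 24.07.0, ge 23.11.0 24.07.0) all go through cmp_versions in scripts/common.sh. A simplified reconstruction from the xtrace, assuming purely numeric dot/dash/colon-separated components (the real helper also routes each component through a decimal() normalizer):

    # Sketch of cmp_versions as reconstructed from the xtrace above; illustrative only.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == *'>'* ]]; return    # greater: true only for >, >=
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == *'<'* ]]; return    # less: true only for <, <=
            fi
        done
        [[ $op == *'='* ]]                    # equal: true only for <=, >=, ==
    }

Matching the xtrace: cmp_versions 23.11.0 '<' 21.11.0 exits 1 and cmp_versions 23.11.0 '<' 24.07.0 exits 0, exactly as the return 1 and return 0 lines above show, and the '>=' comparison against 24.07.0 exits 1.
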
00:02:26.260 Compiler for C supports arguments -march=native: YES
00:02:26.260 Checking for size of "void *" : 8
00:02:26.260 Checking for size of "void *" : 8 (cached)
00:02:26.260 Library m found: YES
00:02:26.260 Library numa found: YES
00:02:26.260 Has header "numaif.h" : YES
00:02:26.260 Library fdt found: NO
00:02:26.260 Library execinfo found: NO
00:02:26.260 Has header "execinfo.h" : YES
00:02:26.260 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:26.260 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:26.260 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:26.260 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:26.260 Run-time dependency openssl found: YES 3.1.1
00:02:26.260 Run-time dependency libpcap found: YES 1.10.4
00:02:26.260 Has header "pcap.h" with dependency libpcap: YES
00:02:26.260 Compiler for C supports arguments -Wcast-qual: YES
00:02:26.260 Compiler for C supports arguments -Wdeprecated: YES
00:02:26.260 Compiler for C supports arguments -Wformat: YES
00:02:26.260 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:26.260 Compiler for C supports arguments -Wformat-security: NO
00:02:26.260 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:26.260 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:26.260 Compiler for C supports arguments -Wnested-externs: YES
00:02:26.260 Compiler for C supports arguments -Wold-style-definition: YES
00:02:26.260 Compiler for C supports arguments -Wpointer-arith: YES
00:02:26.260 Compiler for C supports arguments -Wsign-compare: YES
00:02:26.261 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:26.261 Compiler for C supports arguments -Wundef: YES
00:02:26.261 Compiler for C supports arguments -Wwrite-strings: YES
00:02:26.261 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:26.261 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:26.261 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:26.261 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:26.261 Program objdump found: YES (/usr/bin/objdump)
00:02:26.261 Compiler for C supports arguments -mavx512f: YES
00:02:26.261 Checking if "AVX512 checking" compiles: YES
00:02:26.261 Fetching value of define "__SSE4_2__" : 1
00:02:26.261 Fetching value of define "__AES__" : 1
00:02:26.261 Fetching value of define "__AVX__" : 1
00:02:26.261 Fetching value of define "__AVX2__" : 1
00:02:26.261 Fetching value of define "__AVX512BW__" : 1
00:02:26.261 Fetching value of define "__AVX512CD__" : 1
00:02:26.261 Fetching value of define "__AVX512DQ__" : 1
00:02:26.261 Fetching value of define "__AVX512F__" : 1
00:02:26.261 Fetching value of define "__AVX512VL__" : 1
00:02:26.261 Fetching value of define "__PCLMUL__" : 1
00:02:26.261 Fetching value of define "__RDRND__" : 1
00:02:26.261 Fetching value of define "__RDSEED__" : 1
00:02:26.261 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:26.261 Fetching value of define "__znver1__" : (undefined)
00:02:26.261 Fetching value of define "__znver2__" : (undefined)
00:02:26.261 Fetching value of define "__znver3__" : (undefined)
00:02:26.261 Fetching value of define "__znver4__" : (undefined)
00:02:26.261 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:26.261 Message: lib/log: Defining dependency "log"
00:02:26.261 Message: lib/kvargs: Defining dependency "kvargs"
00:02:26.261 Message: lib/telemetry: Defining dependency "telemetry"
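
All of the dependency probing above and below is driven by the single meson invocation shown earlier. As a hedged sketch, the equivalent standalone configure would be the following; the options are copied verbatim from the logged command, while the trailing ninja step is an assumption (this excerpt ends before the build itself):

    # Options copied from the logged invocation; the ninja line is assumed and
    # does not appear in this log excerpt.
    meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= \
        '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
    ninja -C build-tmp install

Note the deprecation warning earlier in the configure output: newer Meson prefers -Dcpu_instruction_set over -Dmachine, which is exactly what config/meson.build:113 flags.
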
00:02:26.261 Checking for function "getentropy" : NO
00:02:26.261 Message: lib/eal: Defining dependency "eal"
00:02:26.261 Message: lib/ring: Defining dependency "ring"
00:02:26.261 Message: lib/rcu: Defining dependency "rcu"
00:02:26.261 Message: lib/mempool: Defining dependency "mempool"
00:02:26.261 Message: lib/mbuf: Defining dependency "mbuf"
00:02:26.261 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:26.261 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:26.261 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:26.261 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:26.261 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:26.261 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:26.261 Compiler for C supports arguments -mpclmul: YES
00:02:26.261 Compiler for C supports arguments -maes: YES
00:02:26.261 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:26.261 Compiler for C supports arguments -mavx512bw: YES
00:02:26.261 Compiler for C supports arguments -mavx512dq: YES
00:02:26.261 Compiler for C supports arguments -mavx512vl: YES
00:02:26.261 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:26.261 Compiler for C supports arguments -mavx2: YES
00:02:26.261 Compiler for C supports arguments -mavx: YES
00:02:26.261 Message: lib/net: Defining dependency "net"
00:02:26.261 Message: lib/meter: Defining dependency "meter"
00:02:26.261 Message: lib/ethdev: Defining dependency "ethdev"
00:02:26.261 Message: lib/pci: Defining dependency "pci"
00:02:26.261 Message: lib/cmdline: Defining dependency "cmdline"
00:02:26.261 Message: lib/metrics: Defining dependency "metrics"
00:02:26.261 Message: lib/hash: Defining dependency "hash"
00:02:26.261 Message: lib/timer: Defining dependency "timer"
00:02:26.261 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:26.261 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:26.261 Fetching value of define "__AVX512CD__" : 1 (cached)
00:02:26.261 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:26.261 Message: lib/acl: Defining dependency "acl"
00:02:26.261 Message: lib/bbdev: Defining dependency "bbdev"
00:02:26.261 Message: lib/bitratestats: Defining dependency "bitratestats"
00:02:26.261 Run-time dependency libelf found: YES 0.191
00:02:26.261 Message: lib/bpf: Defining dependency "bpf"
00:02:26.261 Message: lib/cfgfile: Defining dependency "cfgfile"
00:02:26.261 Message: lib/compressdev: Defining dependency "compressdev"
00:02:26.261 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:26.261 Message: lib/distributor: Defining dependency "distributor"
00:02:26.261 Message: lib/dmadev: Defining dependency "dmadev"
00:02:26.261 Message: lib/efd: Defining dependency "efd"
00:02:26.261 Message: lib/eventdev: Defining dependency "eventdev"
00:02:26.261 Message: lib/dispatcher: Defining dependency "dispatcher"
00:02:26.261 Message: lib/gpudev: Defining dependency "gpudev"
00:02:26.261 Message: lib/gro: Defining dependency "gro"
00:02:26.261 Message: lib/gso: Defining dependency "gso"
00:02:26.261 Message: lib/ip_frag: Defining dependency "ip_frag"
00:02:26.261 Message: lib/jobstats: Defining dependency "jobstats"
00:02:26.261 Message: lib/latencystats: Defining dependency "latencystats"
00:02:26.261 Message: lib/lpm: Defining dependency "lpm"
00:02:26.261 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:26.261 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:26.261 Fetching value of define "__AVX512IFMA__" : 1
00:02:26.261 Message: lib/member: Defining dependency "member"
00:02:26.261 Message: lib/pcapng: Defining dependency "pcapng"
00:02:26.261 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:26.261 Message: lib/power: Defining dependency "power"
00:02:26.261 Message: lib/rawdev: Defining dependency "rawdev"
00:02:26.261 Message: lib/regexdev: Defining dependency "regexdev"
00:02:26.261 Message: lib/mldev: Defining dependency "mldev"
00:02:26.261 Message: lib/rib: Defining dependency "rib"
00:02:26.261 Message: lib/reorder: Defining dependency "reorder"
00:02:26.261 Message: lib/sched: Defining dependency "sched"
00:02:26.261 Message: lib/security: Defining dependency "security"
00:02:26.261 Message: lib/stack: Defining dependency "stack"
00:02:26.261 Has header "linux/userfaultfd.h" : YES
00:02:26.261 Has header "linux/vduse.h" : YES
00:02:26.261 Message: lib/vhost: Defining dependency "vhost"
00:02:26.261 Message: lib/ipsec: Defining dependency "ipsec"
00:02:26.261 Message: lib/pdcp: Defining dependency "pdcp"
00:02:26.261 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:26.261 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:26.261 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:26.261 Message: lib/fib: Defining dependency "fib"
00:02:26.261 Message: lib/port: Defining dependency "port"
00:02:26.261 Message: lib/pdump: Defining dependency "pdump"
00:02:26.261 Message: lib/table: Defining dependency "table"
00:02:26.261 Message: lib/pipeline: Defining dependency "pipeline"
00:02:26.261 Message: lib/graph: Defining dependency "graph"
00:02:26.261 Message: lib/node: Defining dependency "node"
00:02:26.261 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:26.261 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:26.261 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:26.261 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:27.204 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:27.204 Compiler for C supports arguments -Wno-unused-value: YES
00:02:27.204 Compiler for C supports arguments -Wno-format: YES
00:02:27.204 Compiler for C supports arguments -Wno-format-security: YES
00:02:27.204 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:27.204 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:27.204 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:27.204 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:27.204 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:27.204 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:27.204 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:27.204 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:27.204 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:27.204 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:27.204 Has header "sys/epoll.h" : YES
00:02:27.204 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:27.204 Configuring doxy-api-html.conf using configuration
00:02:27.204 Configuring doxy-api-man.conf using configuration
00:02:27.204 Program mandb found: YES (/usr/bin/mandb)
00:02:27.204 Program sphinx-build found: NO
00:02:27.204 Configuring rte_build_config.h using configuration
00:02:27.204 Message:
00:02:27.204 =================
00:02:27.204 Applications Enabled
00:02:27.204 =================
00:02:27.204
00:02:27.204 apps:
00:02:27.204 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev,
test-cmdline, test-compress-perf, 00:02:27.204 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:27.204 test-pmd, test-regex, test-sad, test-security-perf, 00:02:27.204 00:02:27.204 Message: 00:02:27.204 ================= 00:02:27.204 Libraries Enabled 00:02:27.204 ================= 00:02:27.204 00:02:27.204 libs: 00:02:27.204 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:27.204 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:27.204 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:27.204 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:27.204 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:27.204 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:27.204 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:27.204 00:02:27.204 00:02:27.204 Message: 00:02:27.204 =============== 00:02:27.204 Drivers Enabled 00:02:27.204 =============== 00:02:27.204 00:02:27.204 common: 00:02:27.204 00:02:27.204 bus: 00:02:27.204 pci, vdev, 00:02:27.204 mempool: 00:02:27.204 ring, 00:02:27.204 dma: 00:02:27.204 00:02:27.204 net: 00:02:27.204 i40e, 00:02:27.204 raw: 00:02:27.204 00:02:27.204 crypto: 00:02:27.204 00:02:27.204 compress: 00:02:27.204 00:02:27.204 regex: 00:02:27.204 00:02:27.204 ml: 00:02:27.204 00:02:27.204 vdpa: 00:02:27.204 00:02:27.204 event: 00:02:27.204 00:02:27.204 baseband: 00:02:27.204 00:02:27.204 gpu: 00:02:27.204 00:02:27.204 00:02:27.204 Message: 00:02:27.204 ================= 00:02:27.204 Content Skipped 00:02:27.204 ================= 00:02:27.204 00:02:27.204 apps: 00:02:27.204 00:02:27.204 libs: 00:02:27.204 00:02:27.204 drivers: 00:02:27.204 common/cpt: not in enabled drivers build config 00:02:27.204 common/dpaax: not in enabled drivers build config 00:02:27.204 common/iavf: not in enabled drivers build config 00:02:27.204 common/idpf: not in enabled drivers build config 00:02:27.204 common/mvep: not in enabled drivers build config 00:02:27.204 common/octeontx: not in enabled drivers build config 00:02:27.204 bus/auxiliary: not in enabled drivers build config 00:02:27.204 bus/cdx: not in enabled drivers build config 00:02:27.204 bus/dpaa: not in enabled drivers build config 00:02:27.204 bus/fslmc: not in enabled drivers build config 00:02:27.204 bus/ifpga: not in enabled drivers build config 00:02:27.204 bus/platform: not in enabled drivers build config 00:02:27.204 bus/vmbus: not in enabled drivers build config 00:02:27.204 common/cnxk: not in enabled drivers build config 00:02:27.204 common/mlx5: not in enabled drivers build config 00:02:27.204 common/nfp: not in enabled drivers build config 00:02:27.204 common/qat: not in enabled drivers build config 00:02:27.204 common/sfc_efx: not in enabled drivers build config 00:02:27.204 mempool/bucket: not in enabled drivers build config 00:02:27.204 mempool/cnxk: not in enabled drivers build config 00:02:27.204 mempool/dpaa: not in enabled drivers build config 00:02:27.204 mempool/dpaa2: not in enabled drivers build config 00:02:27.204 mempool/octeontx: not in enabled drivers build config 00:02:27.204 mempool/stack: not in enabled drivers build config 00:02:27.204 dma/cnxk: not in enabled drivers build config 00:02:27.204 dma/dpaa: not in enabled drivers build config 00:02:27.204 dma/dpaa2: not in enabled drivers build config 00:02:27.204 dma/hisilicon: not in enabled drivers build config 00:02:27.204 dma/idxd: not in enabled drivers build 
config 00:02:27.204 dma/ioat: not in enabled drivers build config 00:02:27.204 dma/skeleton: not in enabled drivers build config 00:02:27.204 net/af_packet: not in enabled drivers build config 00:02:27.204 net/af_xdp: not in enabled drivers build config 00:02:27.204 net/ark: not in enabled drivers build config 00:02:27.204 net/atlantic: not in enabled drivers build config 00:02:27.204 net/avp: not in enabled drivers build config 00:02:27.204 net/axgbe: not in enabled drivers build config 00:02:27.204 net/bnx2x: not in enabled drivers build config 00:02:27.204 net/bnxt: not in enabled drivers build config 00:02:27.204 net/bonding: not in enabled drivers build config 00:02:27.204 net/cnxk: not in enabled drivers build config 00:02:27.204 net/cpfl: not in enabled drivers build config 00:02:27.204 net/cxgbe: not in enabled drivers build config 00:02:27.204 net/dpaa: not in enabled drivers build config 00:02:27.204 net/dpaa2: not in enabled drivers build config 00:02:27.204 net/e1000: not in enabled drivers build config 00:02:27.204 net/ena: not in enabled drivers build config 00:02:27.204 net/enetc: not in enabled drivers build config 00:02:27.204 net/enetfec: not in enabled drivers build config 00:02:27.204 net/enic: not in enabled drivers build config 00:02:27.204 net/failsafe: not in enabled drivers build config 00:02:27.204 net/fm10k: not in enabled drivers build config 00:02:27.204 net/gve: not in enabled drivers build config 00:02:27.204 net/hinic: not in enabled drivers build config 00:02:27.204 net/hns3: not in enabled drivers build config 00:02:27.204 net/iavf: not in enabled drivers build config 00:02:27.204 net/ice: not in enabled drivers build config 00:02:27.204 net/idpf: not in enabled drivers build config 00:02:27.204 net/igc: not in enabled drivers build config 00:02:27.204 net/ionic: not in enabled drivers build config 00:02:27.204 net/ipn3ke: not in enabled drivers build config 00:02:27.204 net/ixgbe: not in enabled drivers build config 00:02:27.204 net/mana: not in enabled drivers build config 00:02:27.204 net/memif: not in enabled drivers build config 00:02:27.204 net/mlx4: not in enabled drivers build config 00:02:27.204 net/mlx5: not in enabled drivers build config 00:02:27.204 net/mvneta: not in enabled drivers build config 00:02:27.204 net/mvpp2: not in enabled drivers build config 00:02:27.204 net/netvsc: not in enabled drivers build config 00:02:27.204 net/nfb: not in enabled drivers build config 00:02:27.204 net/nfp: not in enabled drivers build config 00:02:27.204 net/ngbe: not in enabled drivers build config 00:02:27.204 net/null: not in enabled drivers build config 00:02:27.204 net/octeontx: not in enabled drivers build config 00:02:27.204 net/octeon_ep: not in enabled drivers build config 00:02:27.204 net/pcap: not in enabled drivers build config 00:02:27.204 net/pfe: not in enabled drivers build config 00:02:27.204 net/qede: not in enabled drivers build config 00:02:27.204 net/ring: not in enabled drivers build config 00:02:27.204 net/sfc: not in enabled drivers build config 00:02:27.204 net/softnic: not in enabled drivers build config 00:02:27.204 net/tap: not in enabled drivers build config 00:02:27.204 net/thunderx: not in enabled drivers build config 00:02:27.204 net/txgbe: not in enabled drivers build config 00:02:27.204 net/vdev_netvsc: not in enabled drivers build config 00:02:27.204 net/vhost: not in enabled drivers build config 00:02:27.204 net/virtio: not in enabled drivers build config 00:02:27.204 net/vmxnet3: not in enabled drivers build config 
00:02:27.204 raw/cnxk_bphy: not in enabled drivers build config 00:02:27.204 raw/cnxk_gpio: not in enabled drivers build config 00:02:27.204 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:27.204 raw/ifpga: not in enabled drivers build config 00:02:27.204 raw/ntb: not in enabled drivers build config 00:02:27.204 raw/skeleton: not in enabled drivers build config 00:02:27.204 crypto/armv8: not in enabled drivers build config 00:02:27.204 crypto/bcmfs: not in enabled drivers build config 00:02:27.204 crypto/caam_jr: not in enabled drivers build config 00:02:27.204 crypto/ccp: not in enabled drivers build config 00:02:27.204 crypto/cnxk: not in enabled drivers build config 00:02:27.204 crypto/dpaa_sec: not in enabled drivers build config 00:02:27.204 crypto/dpaa2_sec: not in enabled drivers build config 00:02:27.204 crypto/ipsec_mb: not in enabled drivers build config 00:02:27.204 crypto/mlx5: not in enabled drivers build config 00:02:27.204 crypto/mvsam: not in enabled drivers build config 00:02:27.204 crypto/nitrox: not in enabled drivers build config 00:02:27.204 crypto/null: not in enabled drivers build config 00:02:27.204 crypto/octeontx: not in enabled drivers build config 00:02:27.204 crypto/openssl: not in enabled drivers build config 00:02:27.204 crypto/scheduler: not in enabled drivers build config 00:02:27.204 crypto/uadk: not in enabled drivers build config 00:02:27.204 crypto/virtio: not in enabled drivers build config 00:02:27.204 compress/isal: not in enabled drivers build config 00:02:27.204 compress/mlx5: not in enabled drivers build config 00:02:27.204 compress/octeontx: not in enabled drivers build config 00:02:27.204 compress/zlib: not in enabled drivers build config 00:02:27.204 regex/mlx5: not in enabled drivers build config 00:02:27.204 regex/cn9k: not in enabled drivers build config 00:02:27.204 ml/cnxk: not in enabled drivers build config 00:02:27.204 vdpa/ifc: not in enabled drivers build config 00:02:27.204 vdpa/mlx5: not in enabled drivers build config 00:02:27.204 vdpa/nfp: not in enabled drivers build config 00:02:27.204 vdpa/sfc: not in enabled drivers build config 00:02:27.204 event/cnxk: not in enabled drivers build config 00:02:27.204 event/dlb2: not in enabled drivers build config 00:02:27.204 event/dpaa: not in enabled drivers build config 00:02:27.204 event/dpaa2: not in enabled drivers build config 00:02:27.204 event/dsw: not in enabled drivers build config 00:02:27.204 event/opdl: not in enabled drivers build config 00:02:27.204 event/skeleton: not in enabled drivers build config 00:02:27.204 event/sw: not in enabled drivers build config 00:02:27.204 event/octeontx: not in enabled drivers build config 00:02:27.204 baseband/acc: not in enabled drivers build config 00:02:27.204 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:27.204 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:27.204 baseband/la12xx: not in enabled drivers build config 00:02:27.204 baseband/null: not in enabled drivers build config 00:02:27.205 baseband/turbo_sw: not in enabled drivers build config 00:02:27.205 gpu/cuda: not in enabled drivers build config 00:02:27.205 00:02:27.205 00:02:27.205 Build targets in project: 215 00:02:27.205 00:02:27.205 DPDK 23.11.0 00:02:27.205 00:02:27.205 User defined options 00:02:27.205 libdir : lib 00:02:27.205 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:27.205 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:27.205 c_link_args : 00:02:27.205 enable_docs : false 00:02:27.205 
enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:27.205 enable_kmods : false 00:02:27.205 machine : native 00:02:27.205 tests : false 00:02:27.205 00:02:27.205 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:27.205 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:27.465 18:15:47 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:27.465 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:27.465 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:27.465 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:27.465 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:27.465 [4/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:27.465 [5/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:27.465 [6/705] Linking static target lib/librte_kvargs.a 00:02:27.726 [7/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:27.726 [8/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:27.726 [9/705] Linking static target lib/librte_log.a 00:02:27.726 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:27.726 [11/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:27.726 [12/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.726 [13/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:27.986 [14/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:27.986 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:27.986 [16/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.986 [17/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:27.986 [18/705] Linking target lib/librte_log.so.24.0 00:02:28.245 [19/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:28.245 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:28.245 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:28.245 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:28.245 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:28.245 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:28.245 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:28.506 [26/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:28.506 [27/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:28.506 [28/705] Linking static target lib/librte_telemetry.a 00:02:28.506 [29/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:28.506 [30/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:28.506 [31/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:28.506 [32/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:28.506 
[33/705] Linking target lib/librte_kvargs.so.24.0 00:02:28.768 [34/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:28.768 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:28.768 [36/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:28.768 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:28.768 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:28.768 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:28.768 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:28.768 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:28.768 [42/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.768 [43/705] Linking target lib/librte_telemetry.so.24.0 00:02:29.030 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:29.030 [45/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:29.030 [46/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:29.291 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:29.291 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:29.291 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:29.291 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:29.292 [51/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:29.292 [52/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:29.292 [53/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:29.292 [54/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:29.292 [55/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:29.292 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:29.551 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:29.551 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:29.551 [59/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:29.551 [60/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:29.551 [61/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:29.551 [62/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:29.551 [63/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:29.551 [64/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:29.812 [65/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:29.812 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:29.812 [67/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:29.812 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:29.812 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:30.074 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:30.074 [71/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:30.074 [72/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 
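[Note on the records above: ninja is compiling the EAL (librte_eal) sources that every DPDK application brings up first. As an illustrative aside — not part of this build output — a minimal hedged C sketch of the usual rte_eal_init()/rte_eal_cleanup() bootstrap; flags and error handling are trimmed to the essentials.]

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>

    int main(int argc, char **argv)
    {
        /* rte_eal_init() parses the EAL command-line flags (cores,
         * memory, devices) and initializes hugepages, lcores and the
         * bus drivers being built in this log. */
        int ret = rte_eal_init(argc, argv);
        if (ret < 0) {
            fprintf(stderr, "rte_eal_init() failed\n");
            return 1;
        }
        printf("EAL ready, main lcore %u\n", rte_lcore_id());

        rte_eal_cleanup();  /* release hugepage memory and other EAL resources */
        return 0;
    }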
00:02:30.074 [73/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:30.074 [74/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:30.074 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:30.074 [76/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:30.074 [77/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:30.074 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:30.335 [79/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:30.335 [80/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:30.335 [81/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:30.335 [82/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:30.335 [83/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:30.335 [84/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:30.335 [85/705] Linking static target lib/librte_ring.a 00:02:30.595 [86/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:30.595 [87/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:30.595 [88/705] Linking static target lib/librte_eal.a 00:02:30.595 [89/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:30.595 [90/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.595 [91/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:30.857 [92/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:30.857 [93/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:30.857 [94/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:30.857 [95/705] Linking static target lib/librte_mempool.a 00:02:30.857 [96/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:31.118 [97/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:31.118 [98/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:31.118 [99/705] Linking static target lib/librte_rcu.a 00:02:31.118 [100/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:31.118 [101/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:31.118 [102/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:31.118 [103/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:31.379 [104/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.379 [105/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:31.379 [106/705] Linking static target lib/librte_net.a 00:02:31.379 [107/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:31.379 [108/705] Linking static target lib/librte_mbuf.a 00:02:31.379 [109/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.379 [110/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:31.379 [111/705] Linking static target lib/librte_meter.a 00:02:31.379 [112/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:31.641 [113/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.641 [114/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:31.641 [115/705] 
Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.641 [116/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:31.641 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:31.641 [118/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.902 [119/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:32.164 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:32.164 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:32.164 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:32.425 [123/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:32.425 [124/705] Linking static target lib/librte_pci.a 00:02:32.425 [125/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:32.425 [126/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:32.425 [127/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:32.425 [128/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:32.425 [129/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:32.425 [130/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.425 [131/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:32.425 [132/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:32.425 [133/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:32.425 [134/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:32.686 [135/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:32.686 [136/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:32.686 [137/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:32.686 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:32.686 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:32.686 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:32.686 [141/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:32.686 [142/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:32.686 [143/705] Linking static target lib/librte_cmdline.a 00:02:32.686 [144/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:32.947 [145/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:32.947 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:32.947 [147/705] Linking static target lib/librte_metrics.a 00:02:32.947 [148/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:33.208 [149/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:33.208 [150/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.208 [151/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:33.208 [152/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.470 [153/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:33.470 [154/705] Linking static target 
lib/librte_timer.a 00:02:33.470 [155/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:33.470 [156/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.731 [157/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:33.732 [158/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:33.732 [159/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:33.993 [160/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:33.993 [161/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:33.993 [162/705] Linking static target lib/librte_bitratestats.a 00:02:34.254 [163/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.254 [164/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:34.254 [165/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:34.254 [166/705] Linking static target lib/librte_bbdev.a 00:02:34.514 [167/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:34.514 [168/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:34.775 [169/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:34.775 [170/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:34.775 [171/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:34.775 [172/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.775 [173/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:34.775 [174/705] Linking static target lib/librte_hash.a 00:02:34.775 [175/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:34.775 [176/705] Linking static target lib/acl/libavx2_tmp.a 00:02:35.072 [177/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.072 [178/705] Linking target lib/librte_eal.so.24.0 00:02:35.072 [179/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:35.072 [180/705] Linking static target lib/librte_cfgfile.a 00:02:35.072 [181/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:35.072 [182/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:35.072 [183/705] Linking target lib/librte_ring.so.24.0 00:02:35.072 [184/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:35.072 [185/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:35.072 [186/705] Linking target lib/librte_meter.so.24.0 00:02:35.072 [187/705] Linking target lib/librte_pci.so.24.0 00:02:35.371 [188/705] Linking target lib/librte_timer.so.24.0 00:02:35.371 [189/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:35.371 [190/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.371 [191/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:35.371 [192/705] Linking target lib/librte_rcu.so.24.0 00:02:35.371 [193/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:35.371 [194/705] Linking target lib/librte_mempool.so.24.0 00:02:35.371 [195/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:35.371 [196/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:35.371 [197/705] Generating symbol file 
lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:35.371 [198/705] Linking static target lib/librte_ethdev.a 00:02:35.371 [199/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:35.371 [200/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:35.371 [201/705] Linking target lib/librte_mbuf.so.24.0 00:02:35.371 [202/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.371 [203/705] Linking target lib/librte_cfgfile.so.24.0 00:02:35.371 [204/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:35.634 [205/705] Linking target lib/librte_net.so.24.0 00:02:35.634 [206/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:35.634 [207/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:35.634 [208/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:35.634 [209/705] Linking target lib/librte_bbdev.so.24.0 00:02:35.634 [210/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:35.634 [211/705] Linking static target lib/librte_bpf.a 00:02:35.634 [212/705] Linking target lib/librte_hash.so.24.0 00:02:35.634 [213/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:35.634 [214/705] Linking target lib/librte_cmdline.so.24.0 00:02:35.634 [215/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:35.634 [216/705] Linking static target lib/librte_compressdev.a 00:02:35.634 [217/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:35.895 [218/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:35.895 [219/705] Linking static target lib/librte_acl.a 00:02:35.895 [220/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.895 [221/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:35.895 [222/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:35.895 [223/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.156 [224/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.156 [225/705] Linking target lib/librte_acl.so.24.0 00:02:36.156 [226/705] Linking target lib/librte_compressdev.so.24.0 00:02:36.156 [227/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:36.156 [228/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:36.156 [229/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:36.156 [230/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:36.156 [231/705] Linking static target lib/librte_distributor.a 00:02:36.416 [232/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:36.416 [233/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:36.416 [234/705] Linking static target lib/librte_dmadev.a 00:02:36.416 [235/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.416 [236/705] Linking target lib/librte_distributor.so.24.0 00:02:36.676 [237/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 
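[Aside: librte_hash (the cuckoo-hash table compiled and linked just above) is one of the libraries this build produces. A hedged sketch of its typical use — the table name, sizes, and helper functions below are illustrative, not taken from this build; it assumes rte_eal_init() has already run.]

    #include <stdint.h>
    #include <rte_hash.h>
    #include <rte_jhash.h>

    /* Hypothetical helper: create a small hash table keyed by a 32-bit value. */
    static struct rte_hash *make_table(void)
    {
        struct rte_hash_parameters p = {
            .name = "example_table",      /* example name */
            .entries = 1024,              /* example capacity */
            .key_len = sizeof(uint32_t),
            .hash_func = rte_jhash,
            .hash_func_init_val = 0,
            .socket_id = 0,               /* NUMA socket, example value */
        };
        return rte_hash_create(&p);
    }

    /* Add a key, then look it up; both return the key's position (>= 0)
     * on success or a negative errno-style value on failure. */
    static int use_table(struct rte_hash *h)
    {
        uint32_t key = 42;
        if (rte_hash_add_key(h, &key) < 0)
            return -1;
        return rte_hash_lookup(h, &key);
    }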
00:02:36.676 [238/705] Linking target lib/librte_dmadev.so.24.0 00:02:36.676 [239/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:36.676 [240/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:36.936 [241/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:36.936 [242/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:36.936 [243/705] Linking static target lib/librte_efd.a 00:02:36.936 [244/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:36.936 [245/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:36.936 [246/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.936 [247/705] Linking target lib/librte_efd.so.24.0 00:02:37.197 [248/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:37.197 [249/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:37.197 [250/705] Linking static target lib/librte_dispatcher.a 00:02:37.459 [251/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:37.459 [252/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:37.459 [253/705] Linking static target lib/librte_cryptodev.a 00:02:37.459 [254/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:37.459 [255/705] Linking static target lib/librte_gpudev.a 00:02:37.459 [256/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:37.459 [257/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.720 [258/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:37.720 [259/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:37.720 [260/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:37.980 [261/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:37.980 [262/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:37.980 [263/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:37.980 [264/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:37.980 [265/705] Linking static target lib/librte_eventdev.a 00:02:37.980 [266/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:37.980 [267/705] Linking static target lib/librte_gro.a 00:02:37.980 [268/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.240 [269/705] Linking target lib/librte_gpudev.so.24.0 00:02:38.240 [270/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:38.240 [271/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.240 [272/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:38.240 [273/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:38.240 [274/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:38.500 [275/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.500 [276/705] Linking target lib/librte_cryptodev.so.24.0 00:02:38.500 [277/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:38.500 [278/705] Linking static target lib/librte_gso.a 00:02:38.500 [279/705] Generating symbol file 
lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:38.500 [280/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.500 [281/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:38.760 [282/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:38.760 [283/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:38.760 [284/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:38.760 [285/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:38.760 [286/705] Linking static target lib/librte_jobstats.a 00:02:38.760 [287/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:38.760 [288/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:38.760 [289/705] Linking static target lib/librte_ip_frag.a 00:02:39.020 [290/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.020 [291/705] Linking target lib/librte_ethdev.so.24.0 00:02:39.021 [292/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.021 [293/705] Linking target lib/librte_jobstats.so.24.0 00:02:39.021 [294/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:39.021 [295/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:39.021 [296/705] Linking static target lib/librte_latencystats.a 00:02:39.021 [297/705] Linking target lib/librte_metrics.so.24.0 00:02:39.021 [298/705] Linking target lib/librte_bpf.so.24.0 00:02:39.021 [299/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.021 [300/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:39.021 [301/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:39.282 [302/705] Linking target lib/librte_gro.so.24.0 00:02:39.282 [303/705] Linking target lib/librte_gso.so.24.0 00:02:39.282 [304/705] Linking target lib/librte_ip_frag.so.24.0 00:02:39.282 [305/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:39.282 [306/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:39.282 [307/705] Linking target lib/librte_bitratestats.so.24.0 00:02:39.282 [308/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:39.282 [309/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:39.282 [310/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.282 [311/705] Linking target lib/librte_latencystats.so.24.0 00:02:39.542 [312/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:39.542 [313/705] Linking static target lib/librte_lpm.a 00:02:39.542 [314/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:39.542 [315/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:39.542 [316/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.542 [317/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:39.542 [318/705] Linking target lib/librte_eventdev.so.24.0 00:02:39.542 [319/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 
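[Aside: the rte_lpm objects compiled here form librte_lpm, DPDK's longest-prefix-match routing table. A hedged sketch of the API under example sizes — names and values below are illustrative, not from this build, and rte_eal_init() is assumed to have run.]

    #include <stdint.h>
    #include <rte_ip.h>
    #include <rte_lpm.h>

    static int lpm_demo(void)
    {
        struct rte_lpm_config cfg = {
            .max_rules = 1024,   /* example rule capacity */
            .number_tbl8s = 256, /* example tbl8 group count */
            .flags = 0,
        };
        struct rte_lpm *lpm = rte_lpm_create("rt0", 0 /* socket */, &cfg);
        if (lpm == NULL)
            return -1;

        /* Route 10.0.0.0/8 to next hop 1 (addresses in host byte order). */
        rte_lpm_add(lpm, RTE_IPV4(10, 0, 0, 0), 8, 1);

        uint32_t next_hop = 0;
        /* Returns 0 on a hit and fills next_hop; negative on a miss. */
        int ret = rte_lpm_lookup(lpm, RTE_IPV4(10, 1, 2, 3), &next_hop);

        rte_lpm_free(lpm);
        return ret == 0 ? (int)next_hop : ret;
    }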
00:02:39.542 [320/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:39.542 [321/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:39.804 [322/705] Linking target lib/librte_lpm.so.24.0 00:02:39.804 [323/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:39.804 [324/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:39.804 [325/705] Linking target lib/librte_dispatcher.so.24.0 00:02:39.804 [326/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:39.804 [327/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:39.804 [328/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:39.804 [329/705] Linking static target lib/librte_pcapng.a 00:02:39.804 [330/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:39.804 [331/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:40.066 [332/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:40.066 [333/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:40.066 [334/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.066 [335/705] Linking target lib/librte_pcapng.so.24.0 00:02:40.066 [336/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:40.066 [337/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:40.066 [338/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:40.066 [339/705] Linking static target lib/librte_regexdev.a 00:02:40.066 [340/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:40.066 [341/705] Linking static target lib/librte_rawdev.a 00:02:40.066 [342/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:40.066 [343/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:40.326 [344/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:40.326 [345/705] Linking static target lib/librte_power.a 00:02:40.326 [346/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:40.326 [347/705] Linking static target lib/librte_member.a 00:02:40.326 [348/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:40.326 [349/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.326 [350/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:40.585 [351/705] Linking target lib/librte_member.so.24.0 00:02:40.585 [352/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.585 [353/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:40.585 [354/705] Linking static target lib/librte_mldev.a 00:02:40.585 [355/705] Linking target lib/librte_rawdev.so.24.0 00:02:40.585 [356/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:40.585 [357/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:40.585 [358/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.585 [359/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:40.585 [360/705] Linking static target lib/librte_reorder.a 00:02:40.585 [361/705] Generating 
lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.585 [362/705] Linking target lib/librte_power.so.24.0 00:02:40.585 [363/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:40.585 [364/705] Linking target lib/librte_regexdev.so.24.0 00:02:40.845 [365/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:40.845 [366/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:40.845 [367/705] Linking static target lib/librte_rib.a 00:02:40.845 [368/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:40.845 [369/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.845 [370/705] Linking target lib/librte_reorder.so.24.0 00:02:40.845 [371/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:40.845 [372/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:40.845 [373/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:40.845 [374/705] Linking static target lib/librte_stack.a 00:02:40.845 [375/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:41.105 [376/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.105 [377/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:41.105 [378/705] Linking static target lib/librte_security.a 00:02:41.105 [379/705] Linking target lib/librte_stack.so.24.0 00:02:41.105 [380/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.105 [381/705] Linking target lib/librte_rib.so.24.0 00:02:41.105 [382/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:41.105 [383/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:41.365 [384/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:41.365 [385/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.365 [386/705] Linking target lib/librte_security.so.24.0 00:02:41.365 [387/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.365 [388/705] Linking target lib/librte_mldev.so.24.0 00:02:41.365 [389/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:41.365 [390/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:41.624 [391/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:41.624 [392/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:41.885 [393/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:41.885 [394/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:41.885 [395/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:41.885 [396/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:41.885 [397/705] Linking static target lib/librte_sched.a 00:02:42.146 [398/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:42.146 [399/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.146 [400/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:42.146 [401/705] Linking target lib/librte_sched.so.24.0 00:02:42.146 [402/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:42.146 [403/705] Generating symbol file 
lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:42.146 [404/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:42.405 [405/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:42.405 [406/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:42.405 [407/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:42.666 [408/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:42.666 [409/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:42.666 [410/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:42.666 [411/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:42.926 [412/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:42.926 [413/705] Linking static target lib/librte_ipsec.a 00:02:42.926 [414/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:42.926 [415/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:43.186 [416/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:43.187 [417/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.187 [418/705] Linking target lib/librte_ipsec.so.24.0 00:02:43.187 [419/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:43.187 [420/705] Linking static target lib/librte_fib.a 00:02:43.187 [421/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:43.187 [422/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:43.187 [423/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:43.447 [424/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.447 [425/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:43.447 [426/705] Linking target lib/librte_fib.so.24.0 00:02:43.447 [427/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:43.447 [428/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:43.447 [429/705] Linking static target lib/librte_pdcp.a 00:02:43.447 [430/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:43.708 [431/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.708 [432/705] Linking target lib/librte_pdcp.so.24.0 00:02:43.708 [433/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:43.969 [434/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:43.969 [435/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:43.969 [436/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:43.969 [437/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:43.969 [438/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:44.230 [439/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:44.230 [440/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:44.230 [441/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:44.230 [442/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:44.230 [443/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:44.490 [444/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:44.490 [445/705] Compiling C 
object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:44.490 [446/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:44.490 [447/705] Linking static target lib/librte_pdump.a 00:02:44.490 [448/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:44.490 [449/705] Linking static target lib/librte_port.a 00:02:44.490 [450/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:44.747 [451/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:44.747 [452/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.747 [453/705] Linking target lib/librte_pdump.so.24.0 00:02:44.747 [454/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:45.005 [455/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.005 [456/705] Linking target lib/librte_port.so.24.0 00:02:45.005 [457/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:45.005 [458/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:45.005 [459/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:45.005 [460/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:45.005 [461/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:45.005 [462/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:45.268 [463/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:45.268 [464/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:45.268 [465/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:45.268 [466/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:45.268 [467/705] Linking static target lib/librte_table.a 00:02:45.527 [468/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:45.527 [469/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:45.784 [470/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.784 [471/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:45.784 [472/705] Linking target lib/librte_table.so.24.0 00:02:45.784 [473/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:45.784 [474/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:45.784 [475/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:46.041 [476/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:46.041 [477/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:46.041 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:46.041 [479/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:46.041 [480/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:46.298 [481/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:46.557 [482/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:46.557 [483/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:46.557 [484/705] Linking static target lib/librte_graph.a 00:02:46.557 [485/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:46.557 [486/705] 
Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:46.557 [487/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:46.557 [488/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:46.816 [489/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:46.816 [490/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.816 [491/705] Linking target lib/librte_graph.so.24.0 00:02:47.074 [492/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:47.075 [493/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:47.075 [494/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:47.075 [495/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:47.075 [496/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:47.075 [497/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:47.333 [498/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:47.333 [499/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:47.333 [500/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:47.333 [501/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:47.333 [502/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:47.591 [503/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:47.591 [504/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:47.591 [505/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:47.591 [506/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:47.591 [507/705] Linking static target lib/librte_node.a 00:02:47.591 [508/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:47.591 [509/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:47.591 [510/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:47.849 [511/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.849 [512/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:47.849 [513/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:47.849 [514/705] Linking target lib/librte_node.so.24.0 00:02:47.849 [515/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:47.849 [516/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:48.108 [517/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:48.108 [518/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:48.108 [519/705] Linking static target drivers/librte_bus_pci.a 00:02:48.108 [520/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:48.108 [521/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:48.108 [522/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:48.108 [523/705] Linking static target drivers/librte_bus_vdev.a 00:02:48.108 [524/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:48.108 [525/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:48.108 [526/705] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:48.108 [527/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:48.366 [528/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.366 [529/705] Linking target drivers/librte_bus_vdev.so.24.0 00:02:48.366 [530/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.366 [531/705] Linking target drivers/librte_bus_pci.so.24.0 00:02:48.366 [532/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:48.366 [533/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:48.366 [534/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:48.366 [535/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:48.367 [536/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:48.367 [537/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:48.367 [538/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:48.367 [539/705] Linking static target drivers/librte_mempool_ring.a 00:02:48.625 [540/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:48.625 [541/705] Linking target drivers/librte_mempool_ring.so.24.0 00:02:48.625 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:48.884 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:49.141 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:49.141 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:49.398 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:49.656 [547/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:49.656 [548/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:49.656 [549/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:49.656 [550/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:49.656 [551/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:49.656 [552/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:49.915 [553/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:50.173 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:50.173 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:50.173 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:50.173 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:50.431 [558/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:50.431 [559/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:50.431 [560/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:50.431 [561/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:50.688 [562/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:50.688 [563/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:50.946 [564/705] Compiling 
C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:50.946 [565/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:50.946 [566/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:50.946 [567/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:50.946 [568/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:50.946 [569/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:50.946 [570/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:51.205 [571/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:51.205 [572/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:51.205 [573/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:51.463 [574/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:51.463 [575/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:51.463 [576/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:51.463 [577/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:51.463 [578/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:51.463 [579/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:51.721 [580/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:51.721 [581/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:51.721 [582/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:51.721 [583/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:51.721 [584/705] Linking static target drivers/librte_net_i40e.a 00:02:51.980 [585/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:51.980 [586/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:51.980 [587/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:51.980 [588/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:52.239 [589/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:52.239 [590/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.239 [591/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:52.239 [592/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:52.239 [593/705] Linking target drivers/librte_net_i40e.so.24.0 00:02:52.239 [594/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:52.497 [595/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:52.497 [596/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:52.497 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:52.772 [598/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:52.772 [599/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:52.772 [600/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:52.772 [601/705] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:52.772 [602/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:53.030 [603/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:53.030 [604/705] Linking static target lib/librte_vhost.a 00:02:53.030 [605/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:53.030 [606/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:53.030 [607/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:53.288 [608/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:53.288 [609/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:53.288 [610/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:53.289 [611/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:53.289 [612/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:53.289 [613/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:53.547 [614/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:53.807 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:53.807 [616/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:53.807 [617/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.807 [618/705] Linking target lib/librte_vhost.so.24.0 00:02:54.068 [619/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:54.330 [620/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:54.330 [621/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:54.330 [622/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:54.330 [623/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:54.592 [624/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:54.592 [625/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:54.592 [626/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:54.592 [627/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:54.592 [628/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:54.592 [629/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:54.853 [630/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:54.853 [631/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:54.853 [632/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:54.853 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:54.853 [634/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:54.853 [635/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:55.114 [636/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:55.114 [637/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:55.114 [638/705] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:55.114 [639/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:55.114 [640/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:55.114 [641/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:55.386 [642/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:55.386 [643/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:55.386 [644/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:55.386 [645/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:55.386 [646/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:55.650 [647/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:55.650 [648/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:55.650 [649/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:55.650 [650/705] Linking static target lib/librte_pipeline.a 00:02:55.650 [651/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:55.908 [652/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:55.908 [653/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:55.908 [654/705] Linking target app/dpdk-dumpcap 00:02:55.908 [655/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:56.166 [656/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:56.166 [657/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:56.166 [658/705] Linking target app/dpdk-graph 00:02:56.166 [659/705] Linking target app/dpdk-pdump 00:02:56.166 [660/705] Linking target app/dpdk-proc-info 00:02:56.166 [661/705] Linking target app/dpdk-test-acl 00:02:56.423 [662/705] Linking target app/dpdk-test-cmdline 00:02:56.423 [663/705] Linking target app/dpdk-test-compress-perf 00:02:56.423 [664/705] Linking target app/dpdk-test-crypto-perf 00:02:56.423 [665/705] Linking target app/dpdk-test-dma-perf 00:02:56.681 [666/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:56.681 [667/705] Linking target app/dpdk-test-flow-perf 00:02:56.681 [668/705] Linking target app/dpdk-test-gpudev 00:02:56.681 [669/705] Linking target app/dpdk-test-eventdev 00:02:56.681 [670/705] Linking target app/dpdk-test-fib 00:02:56.681 [671/705] Linking target app/dpdk-test-mldev 00:02:56.681 [672/705] Linking target app/dpdk-test-pipeline 00:02:56.939 [673/705] Linking target app/dpdk-test-bbdev 00:02:56.939 [674/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:56.939 [675/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:56.939 [676/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:57.199 [677/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:57.199 [678/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:57.199 [679/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:57.199 [680/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:57.457 [681/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:57.457 [682/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:57.457 [683/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:02:57.457 [684/705] Linking target lib/librte_pipeline.so.24.0
00:02:57.457 [685/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:02:57.715 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:57.715 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:57.973 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:02:57.973 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:02:57.973 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:57.973 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:02:58.231 [692/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:58.231 [693/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:58.231 [694/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:02:58.490 [695/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:58.490 [696/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:58.490 [697/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:58.749 [698/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:02:58.749 [699/705] Linking target app/dpdk-test-sad
00:02:58.749 [700/705] Linking target app/dpdk-test-regex
00:02:58.749 [701/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:58.749 [702/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:58.749 [703/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:59.314 [704/705] Linking target app/dpdk-test-security-perf
00:02:59.314 [705/705] Linking target app/dpdk-testpmd
00:02:59.314 18:16:19 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s
00:02:59.314 18:16:19 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:59.314 18:16:19 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:02:59.314 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:59.314 [0/1] Installing files.
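The three "$"-traced commands above mark the transition in this log: once all 705 build targets finish, the autobuild script probes the host OS with uname -s, tests the result against FreeBSD, and (on this Linux host) installs the freshly built DPDK tree with ninja. A minimal sketch of that step for anyone replaying it outside CI follows; the if/else pairing and the set -e guard are assumptions (the trace only shows the FreeBSD test evaluating false before the install ran), while the build directory and the -j10 job count are taken verbatim from the trace.

    #!/usr/bin/env bash
    # Hedged reconstruction of the install step traced above; not the
    # actual autobuild_common.sh source.
    set -e
    build_dir=/home/vagrant/spdk_repo/dpdk/build-tmp   # directory recorded in the trace
    if [[ "$(uname -s)" == "FreeBSD" ]]; then
        # Branch not taken in this run (uname reported Linux); what the real
        # script does on FreeBSD is not visible in this log excerpt.
        echo "FreeBSD host: install path not covered by this log" >&2
    else
        ninja -C "$build_dir" -j10 install   # matches the traced command
    fi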
00:02:59.574 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.574 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.575 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:59.575 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.575 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.576 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:59.576 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:59.576 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.576 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.576 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.576 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.576 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.576 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.576 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.576 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.576 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.576 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.576 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
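[editor's note] The librte_*.a / librte_*.so.24.0 pairs above are the static and shared builds of each DPDK library (ABI 24.0, i.e. the v23.11 tree checked out earlier), all landing in /home/vagrant/spdk_repo/dpdk/build/lib. A minimal sketch of a consumer of this installed tree, assuming its libdpdk pkg-config file is reachable on PKG_CONFIG_PATH (the exact pkgconfig location under this build prefix is an assumption, as are all names in the sketch): bring up the EAL from librte_eal, then exercise librte_ring.

    /* Hypothetical consumer of the libraries installed above; not part of this
     * build job. Build sketch, assuming libdpdk.pc from this prefix is on
     * PKG_CONFIG_PATH:
     *   cc ring_demo.c $(pkg-config --cflags --libs libdpdk) -o ring_demo
     */
    #include <stdio.h>
    #include <rte_debug.h>
    #include <rte_eal.h>
    #include <rte_errno.h>
    #include <rte_lcore.h>
    #include <rte_ring.h>

    int main(int argc, char **argv)
    {
        /* Bring up the EAL (librte_eal.so.24.0); it consumes its own argv flags. */
        if (rte_eal_init(argc, argv) < 0)
            rte_panic("EAL init failed\n");

        /* Single-producer/single-consumer ring from librte_ring.so.24.0;
         * the count must be a power of two. */
        struct rte_ring *r = rte_ring_create("demo_ring", 1024, rte_socket_id(),
                                             RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (r == NULL)
            rte_panic("rte_ring_create: %s\n", rte_strerror(rte_errno));

        int value = 42;
        void *obj = &value, *out = NULL;
        rte_ring_enqueue(r, obj);           /* 0 on success, -ENOBUFS if full */
        if (rte_ring_dequeue(r, &out) == 0) /* 0 on success, -ENOENT if empty */
            printf("dequeued %d\n", *(int *)out);

        rte_ring_free(r);
        rte_eal_cleanup();
        return 0;
    }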
00:02:59.577 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
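[editor's note] The run above also installs the lookup-structure libraries (librte_hash, librte_lpm, librte_member, librte_fib, ...). As a second hedged sketch of what one of them provides, here is an illustrative librte_lpm route table; every name and capacity value is made up for the example, and rte_lpm_create() requires an already-initialized EAL (as in the previous sketch), since LPM tables live in EAL-managed memory.

    /* Illustrative use of librte_lpm.so.24.0 installed above; assumes
     * rte_eal_init() has already run. All identifiers here are example-only. */
    #include <rte_ip.h>
    #include <rte_lpm.h>
    #include <rte_memory.h>

    static int lpm_demo(void)
    {
        struct rte_lpm_config cfg = {
            .max_rules = 1024,   /* capacity knobs are illustrative */
            .number_tbl8s = 256,
            .flags = 0,
        };
        struct rte_lpm *lpm = rte_lpm_create("demo_lpm", SOCKET_ID_ANY, &cfg);
        if (lpm == NULL)
            return -1;

        /* Route 10.0.0.0/24 to next-hop id 7. */
        rte_lpm_add(lpm, RTE_IPV4(10, 0, 0, 0), 24, 7);

        uint32_t next_hop = 0;
        /* Longest-prefix match: 0 on hit, -ENOENT on miss. */
        int hit = rte_lpm_lookup(lpm, RTE_IPV4(10, 0, 0, 5), &next_hop);

        rte_lpm_free(lpm);
        return hit == 0 ? (int)next_hop : -1;
    }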
00:02:59.577 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.577 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.837 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.837 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.837 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.837 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:59.837 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.837 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:59.837 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.837 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:59.837 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.837 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:59.837 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.839 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:59.840 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:02:59.840 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:59.840 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:02:59.840 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:59.840 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:02:59.840 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:59.840 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:02:59.840 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:59.840 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:02:59.840 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:59.840 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:02:59.840 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:59.840 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:02:59.840 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:59.840 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:02:59.840 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:59.840 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:02:59.840 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:59.840 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:02:59.840 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:59.840 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:02:59.840 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:59.840 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:02:59.840 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:59.840 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:02:59.840 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:59.840 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:02:59.840 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:59.840 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:02:59.840 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:59.840 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:02:59.840 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:59.840 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:02:59.840 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:59.840 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:02:59.841 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:59.841 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:02:59.841 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:59.841 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:02:59.841 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:59.841 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:02:59.841 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:59.841 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:02:59.841 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:59.841 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:02:59.841 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:59.841 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:02:59.841 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:59.841 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:02:59.841 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:59.841 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:02:59.841 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:59.841 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:02:59.841 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:59.841 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:02:59.841 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:59.841 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:02:59.841 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:59.841 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:02:59.841 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:59.841 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:02:59.841 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:59.841 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:02:59.841 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:59.841 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:02:59.841 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:59.841 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:02:59.841 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:59.841 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:02:59.841 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:59.841 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:02:59.841 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:59.841 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:02:59.841 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:59.841 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:02:59.841 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:59.841 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:02:59.841 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:59.841 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:02:59.841 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:59.841 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:02:59.841 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:59.841 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:02:59.841 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:59.841 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:02:59.841 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:59.841 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:02:59.841 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:59.841 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:02:59.841 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:59.841 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:59.841 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:59.841 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:59.841 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:59.841 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:59.841 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:59.841 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:59.841 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:59.841 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:59.841 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:59.841 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:59.841 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:59.841 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:02:59.841 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:59.841 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:02:59.841 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:59.841 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:02:59.841 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:59.841 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:02:59.841 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:59.841 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:02:59.841 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:59.841 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:02:59.841 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:59.841 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:02:59.841 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:59.841 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:02:59.841 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:59.841 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:02:59.841 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:59.841 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:02:59.841 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:59.841 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:02:59.841 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:59.841 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:59.841 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:59.841 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:59.841 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:59.841 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:59.841 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:59.842 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:02:59.842 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:59.842 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:00.099 18:16:19 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:00.099 18:16:19 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:00.099 00:03:00.099 real 0m39.066s 00:03:00.099 user 4m30.658s 00:03:00.099 sys 0m38.771s 00:03:00.099 18:16:19 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:00.099 ************************************ 00:03:00.099 END TEST build_native_dpdk 00:03:00.099 ************************************ 00:03:00.099 18:16:19 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:00.099 18:16:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:00.099 18:16:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:00.099 18:16:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:00.099 18:16:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:00.099 18:16:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:00.099 18:16:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:00.099 18:16:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:00.099 18:16:19 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:00.099 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:00.099 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.099 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:00.099 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:00.356 Using 'verbs' RDMA provider 00:03:11.748 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:21.714 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:21.714 Creating mk/config.mk...done. 00:03:21.714 Creating mk/cc.flags.mk...done. 00:03:21.714 Type 'make' to build. 00:03:21.714 18:16:41 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:21.714 18:16:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:21.714 18:16:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:21.714 18:16:41 -- common/autotest_common.sh@10 -- $ set +x 00:03:21.714 ************************************ 00:03:21.715 START TEST make 00:03:21.715 ************************************ 00:03:21.715 18:16:41 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:21.973 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:21.973 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:21.973 meson setup builddir \ 00:03:21.973 -Dwith-libaio=enabled \ 00:03:21.973 -Dwith-liburing=enabled \ 00:03:21.973 -Dwith-libvfn=disabled \ 00:03:21.973 -Dwith-spdk=disabled \ 00:03:21.973 -Dexamples=false \ 00:03:21.973 -Dtests=false \ 00:03:21.973 -Dtools=false && \ 00:03:21.973 meson compile -C builddir && \ 00:03:21.973 cd -) 00:03:21.973 make[1]: Nothing to be done for 'all'. 
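The DPDK install phase above lays down the versioned shared-object chain (librte_*.so -> librte_*.so.24 -> librte_*.so.24.0, with the PMDs re-pointed into the dpdk/pmds-24.0 plugin directory) and drops libdpdk.pc into build/lib/pkgconfig, which is what the ./configure invocation consumes via --with-dpdk; the "Using ... for additional libs" line is configure reporting that lookup. A minimal shell sketch of how the hand-off could be verified, reusing the paths from this log (the expected outputs in the comments are assumptions, not captured output):

  # Walk the symlink chain created by the "Installing symlink pointing to ..." steps:
  cd /home/vagrant/spdk_repo/dpdk/build/lib
  readlink librte_eal.so       # expected: librte_eal.so.24
  readlink librte_eal.so.24    # expected: librte_eal.so.24.0

  # Check that pkg-config resolves the freshly installed DPDK:
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk        # expected: the 23.11 release version
  pkg-config --cflags --libs libdpdk     # the flags the SPDK build picks up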
00:03:23.874 The Meson build system 00:03:23.874 Version: 1.5.0 00:03:23.874 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:23.874 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:23.874 Build type: native build 00:03:23.874 Project name: xnvme 00:03:23.874 Project version: 0.7.5 00:03:23.874 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:23.874 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:23.874 Host machine cpu family: x86_64 00:03:23.874 Host machine cpu: x86_64 00:03:23.874 Message: host_machine.system: linux 00:03:23.874 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:23.874 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:23.874 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:23.874 Run-time dependency threads found: YES 00:03:23.874 Has header "setupapi.h" : NO 00:03:23.874 Has header "linux/blkzoned.h" : YES 00:03:23.874 Has header "linux/blkzoned.h" : YES (cached) 00:03:23.874 Has header "libaio.h" : YES 00:03:23.874 Library aio found: YES 00:03:23.874 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:23.874 Run-time dependency liburing found: YES 2.2 00:03:23.874 Dependency libvfn skipped: feature with-libvfn disabled 00:03:23.874 Found CMake: /usr/bin/cmake (3.27.7) 00:03:23.874 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:23.874 Subproject spdk : skipped: feature with-spdk disabled 00:03:23.874 Run-time dependency appleframeworks found: NO (tried framework) 00:03:23.874 Run-time dependency appleframeworks found: NO (tried framework) 00:03:23.874 Library rt found: YES 00:03:23.874 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:23.874 Configuring xnvme_config.h using configuration 00:03:23.874 Configuring xnvme.spec using configuration 00:03:23.874 Run-time dependency bash-completion found: YES 2.11 00:03:23.874 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:23.874 Program cp found: YES (/usr/bin/cp) 00:03:23.874 Build targets in project: 3 00:03:23.874 00:03:23.874 xnvme 0.7.5 00:03:23.874 00:03:23.874 Subprojects 00:03:23.874 spdk : NO Feature 'with-spdk' disabled 00:03:23.874 00:03:23.874 User defined options 00:03:23.874 examples : false 00:03:23.874 tests : false 00:03:23.874 tools : false 00:03:23.874 with-libaio : enabled 00:03:23.874 with-liburing: enabled 00:03:23.874 with-libvfn : disabled 00:03:23.874 with-spdk : disabled 00:03:23.874 00:03:23.874 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:24.133 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:24.133 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:24.392 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:24.392 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:24.392 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:24.392 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:24.392 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:24.392 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:24.392 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:24.392 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:24.392 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 
00:03:24.392 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:24.392 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:24.392 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:24.392 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:24.392 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:24.392 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:24.392 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:24.392 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:24.392 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:24.392 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:24.392 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:24.392 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:24.392 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:24.392 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:24.392 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:24.392 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:24.651 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:24.651 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:24.651 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:24.651 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:24.651 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:24.651 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:24.651 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:24.651 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:24.652 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:24.652 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:24.652 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:24.652 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:24.652 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:24.652 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:24.652 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:24.652 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:24.652 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:24.652 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:24.652 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:24.652 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:24.652 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:24.652 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:24.652 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:24.652 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:24.652 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:24.652 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:24.652 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:24.652 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:24.652 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:24.652 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:24.652 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:24.652 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:24.652 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:24.652 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:24.652 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:24.910 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:24.910 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:24.910 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:24.910 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:24.910 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:24.910 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:24.910 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:24.910 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:24.910 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:24.910 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:24.910 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:24.910 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:25.169 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:25.427 [75/76] Linking static target lib/libxnvme.a 00:03:25.427 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:25.427 INFO: autodetecting backend as ninja 00:03:25.427 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:25.427 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:57.547 CC lib/ut/ut.o 00:03:57.547 CC lib/ut_mock/mock.o 00:03:57.547 CC lib/log/log_flags.o 00:03:57.547 CC lib/log/log.o 00:03:57.547 CC lib/log/log_deprecated.o 00:03:57.547 LIB libspdk_ut.a 00:03:57.547 SO libspdk_ut.so.2.0 00:03:57.547 LIB libspdk_ut_mock.a 00:03:57.547 LIB libspdk_log.a 00:03:57.547 SO libspdk_ut_mock.so.6.0 00:03:57.547 SO libspdk_log.so.7.1 00:03:57.547 SYMLINK libspdk_ut.so 00:03:57.547 SYMLINK libspdk_ut_mock.so 00:03:57.547 SYMLINK libspdk_log.so 00:03:57.547 CC lib/util/base64.o 00:03:57.547 CC lib/util/bit_array.o 00:03:57.547 CC lib/util/cpuset.o 00:03:57.547 CC lib/util/crc16.o 00:03:57.547 CC lib/util/crc32.o 00:03:57.547 CC lib/util/crc32c.o 00:03:57.547 CC lib/dma/dma.o 00:03:57.547 CXX lib/trace_parser/trace.o 00:03:57.547 CC lib/ioat/ioat.o 00:03:57.547 CC lib/vfio_user/host/vfio_user_pci.o 00:03:57.547 CC lib/util/crc32_ieee.o 00:03:57.547 CC lib/util/crc64.o 00:03:57.547 CC lib/util/dif.o 00:03:57.547 LIB libspdk_dma.a 00:03:57.547 CC lib/util/fd.o 00:03:57.547 CC lib/util/fd_group.o 00:03:57.547 SO libspdk_dma.so.5.0 00:03:57.547 CC lib/util/file.o 00:03:57.547 SYMLINK libspdk_dma.so 00:03:57.547 CC lib/vfio_user/host/vfio_user.o 00:03:57.547 CC lib/util/hexlify.o 00:03:57.547 CC lib/util/iov.o 00:03:57.547 CC 
lib/util/math.o 00:03:57.806 LIB libspdk_ioat.a 00:03:57.806 CC lib/util/net.o 00:03:57.806 SO libspdk_ioat.so.7.0 00:03:57.806 CC lib/util/pipe.o 00:03:57.806 SYMLINK libspdk_ioat.so 00:03:57.806 CC lib/util/strerror_tls.o 00:03:57.806 CC lib/util/string.o 00:03:57.806 CC lib/util/uuid.o 00:03:57.806 CC lib/util/xor.o 00:03:57.806 CC lib/util/zipf.o 00:03:57.806 CC lib/util/md5.o 00:03:57.806 LIB libspdk_vfio_user.a 00:03:57.806 SO libspdk_vfio_user.so.5.0 00:03:57.806 SYMLINK libspdk_vfio_user.so 00:03:58.065 LIB libspdk_util.a 00:03:58.323 SO libspdk_util.so.10.1 00:03:58.323 LIB libspdk_trace_parser.a 00:03:58.323 SYMLINK libspdk_util.so 00:03:58.323 SO libspdk_trace_parser.so.6.0 00:03:58.582 SYMLINK libspdk_trace_parser.so 00:03:58.582 CC lib/env_dpdk/env.o 00:03:58.582 CC lib/env_dpdk/memory.o 00:03:58.582 CC lib/env_dpdk/pci.o 00:03:58.582 CC lib/env_dpdk/threads.o 00:03:58.582 CC lib/idxd/idxd.o 00:03:58.582 CC lib/env_dpdk/init.o 00:03:58.582 CC lib/conf/conf.o 00:03:58.582 CC lib/json/json_parse.o 00:03:58.582 CC lib/rdma_utils/rdma_utils.o 00:03:58.582 CC lib/vmd/vmd.o 00:03:58.582 CC lib/vmd/led.o 00:03:58.840 LIB libspdk_conf.a 00:03:58.840 CC lib/json/json_util.o 00:03:58.840 SO libspdk_conf.so.6.0 00:03:58.840 LIB libspdk_rdma_utils.a 00:03:58.840 SYMLINK libspdk_conf.so 00:03:58.840 CC lib/json/json_write.o 00:03:58.840 CC lib/idxd/idxd_user.o 00:03:58.840 CC lib/env_dpdk/pci_ioat.o 00:03:58.840 SO libspdk_rdma_utils.so.1.0 00:03:58.840 SYMLINK libspdk_rdma_utils.so 00:03:58.840 CC lib/env_dpdk/pci_virtio.o 00:03:58.840 CC lib/env_dpdk/pci_vmd.o 00:03:58.840 CC lib/env_dpdk/pci_idxd.o 00:03:58.840 CC lib/env_dpdk/pci_event.o 00:03:58.840 CC lib/env_dpdk/sigbus_handler.o 00:03:59.098 CC lib/env_dpdk/pci_dpdk.o 00:03:59.098 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:59.098 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:59.098 LIB libspdk_vmd.a 00:03:59.098 LIB libspdk_json.a 00:03:59.098 CC lib/idxd/idxd_kernel.o 00:03:59.098 SO libspdk_vmd.so.6.0 00:03:59.098 SO libspdk_json.so.6.0 00:03:59.098 CC lib/rdma_provider/common.o 00:03:59.098 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:59.098 SYMLINK libspdk_vmd.so 00:03:59.098 SYMLINK libspdk_json.so 00:03:59.098 LIB libspdk_idxd.a 00:03:59.357 SO libspdk_idxd.so.12.1 00:03:59.357 LIB libspdk_rdma_provider.a 00:03:59.357 SYMLINK libspdk_idxd.so 00:03:59.357 SO libspdk_rdma_provider.so.7.0 00:03:59.357 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:59.357 CC lib/jsonrpc/jsonrpc_client.o 00:03:59.357 CC lib/jsonrpc/jsonrpc_server.o 00:03:59.357 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:59.357 SYMLINK libspdk_rdma_provider.so 00:03:59.616 LIB libspdk_env_dpdk.a 00:03:59.616 LIB libspdk_jsonrpc.a 00:03:59.616 SO libspdk_jsonrpc.so.6.0 00:03:59.616 SO libspdk_env_dpdk.so.15.1 00:03:59.616 SYMLINK libspdk_jsonrpc.so 00:03:59.616 SYMLINK libspdk_env_dpdk.so 00:03:59.873 CC lib/rpc/rpc.o 00:04:00.131 LIB libspdk_rpc.a 00:04:00.131 SO libspdk_rpc.so.6.0 00:04:00.131 SYMLINK libspdk_rpc.so 00:04:00.390 CC lib/trace/trace.o 00:04:00.390 CC lib/trace/trace_flags.o 00:04:00.390 CC lib/trace/trace_rpc.o 00:04:00.390 CC lib/notify/notify_rpc.o 00:04:00.390 CC lib/notify/notify.o 00:04:00.390 CC lib/keyring/keyring_rpc.o 00:04:00.390 CC lib/keyring/keyring.o 00:04:00.648 LIB libspdk_notify.a 00:04:00.648 SO libspdk_notify.so.6.0 00:04:00.648 LIB libspdk_keyring.a 00:04:00.648 SYMLINK libspdk_notify.so 00:04:00.648 SO libspdk_keyring.so.2.0 00:04:00.648 LIB libspdk_trace.a 00:04:00.648 SO libspdk_trace.so.11.0 00:04:00.648 SYMLINK libspdk_keyring.so 
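The interleaved CC/LIB/SO/SYMLINK prefixes are SPDK's condensed make output: each object is compiled (CC) and archived into a static libspdk_*.a (LIB), and because configure was run with --with-shared, each library is also linked into a versioned libspdk_*.so plus an unversioned symlink (SO/SYMLINK). With libspdk_env_dpdk.so linked at this point, a hedged sketch for confirming it was built against the DPDK tree installed earlier (build/lib as SPDK's output directory is an assumption about the repo layout):

  cd /home/vagrant/spdk_repo/spdk
  ls -l build/lib/libspdk_env_dpdk.so*   # the SO/SYMLINK pair emitted above
  # List the librte_* dependencies pulled in through --with-dpdk; whether they
  # resolve here depends on the rpath/LD_LIBRARY_PATH of this environment:
  ldd build/lib/libspdk_env_dpdk.so | grep librte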
00:04:00.648 SYMLINK libspdk_trace.so 00:04:00.906 CC lib/sock/sock.o 00:04:00.906 CC lib/sock/sock_rpc.o 00:04:00.906 CC lib/thread/iobuf.o 00:04:00.906 CC lib/thread/thread.o 00:04:01.473 LIB libspdk_sock.a 00:04:01.473 SO libspdk_sock.so.10.0 00:04:01.473 SYMLINK libspdk_sock.so 00:04:01.732 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:01.732 CC lib/nvme/nvme_ctrlr.o 00:04:01.732 CC lib/nvme/nvme_ns.o 00:04:01.732 CC lib/nvme/nvme_fabric.o 00:04:01.732 CC lib/nvme/nvme.o 00:04:01.732 CC lib/nvme/nvme_pcie.o 00:04:01.732 CC lib/nvme/nvme_ns_cmd.o 00:04:01.732 CC lib/nvme/nvme_qpair.o 00:04:01.732 CC lib/nvme/nvme_pcie_common.o 00:04:02.298 CC lib/nvme/nvme_quirks.o 00:04:02.298 CC lib/nvme/nvme_transport.o 00:04:02.298 CC lib/nvme/nvme_discovery.o 00:04:02.298 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:02.298 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:02.556 LIB libspdk_thread.a 00:04:02.556 CC lib/nvme/nvme_tcp.o 00:04:02.556 SO libspdk_thread.so.11.0 00:04:02.556 CC lib/nvme/nvme_opal.o 00:04:02.556 CC lib/nvme/nvme_io_msg.o 00:04:02.556 SYMLINK libspdk_thread.so 00:04:02.556 CC lib/nvme/nvme_poll_group.o 00:04:02.815 CC lib/nvme/nvme_zns.o 00:04:02.815 CC lib/nvme/nvme_stubs.o 00:04:02.815 CC lib/nvme/nvme_auth.o 00:04:03.073 CC lib/accel/accel.o 00:04:03.073 CC lib/nvme/nvme_cuse.o 00:04:03.073 CC lib/nvme/nvme_rdma.o 00:04:03.073 CC lib/blob/blobstore.o 00:04:03.332 CC lib/init/json_config.o 00:04:03.332 CC lib/virtio/virtio.o 00:04:03.332 CC lib/fsdev/fsdev.o 00:04:03.332 CC lib/init/subsystem.o 00:04:03.591 CC lib/init/subsystem_rpc.o 00:04:03.591 CC lib/blob/request.o 00:04:03.591 CC lib/blob/zeroes.o 00:04:03.591 CC lib/init/rpc.o 00:04:03.849 CC lib/virtio/virtio_vhost_user.o 00:04:03.849 CC lib/virtio/virtio_vfio_user.o 00:04:03.849 CC lib/virtio/virtio_pci.o 00:04:03.849 CC lib/blob/blob_bs_dev.o 00:04:03.849 LIB libspdk_init.a 00:04:03.849 CC lib/accel/accel_rpc.o 00:04:03.849 SO libspdk_init.so.6.0 00:04:04.108 CC lib/accel/accel_sw.o 00:04:04.108 SYMLINK libspdk_init.so 00:04:04.108 CC lib/fsdev/fsdev_io.o 00:04:04.108 CC lib/fsdev/fsdev_rpc.o 00:04:04.108 LIB libspdk_virtio.a 00:04:04.108 SO libspdk_virtio.so.7.0 00:04:04.108 CC lib/event/app.o 00:04:04.108 CC lib/event/log_rpc.o 00:04:04.108 CC lib/event/reactor.o 00:04:04.108 CC lib/event/app_rpc.o 00:04:04.108 CC lib/event/scheduler_static.o 00:04:04.108 LIB libspdk_accel.a 00:04:04.108 SYMLINK libspdk_virtio.so 00:04:04.366 LIB libspdk_fsdev.a 00:04:04.366 SO libspdk_accel.so.16.0 00:04:04.366 SO libspdk_fsdev.so.2.0 00:04:04.366 SYMLINK libspdk_accel.so 00:04:04.366 LIB libspdk_nvme.a 00:04:04.366 SYMLINK libspdk_fsdev.so 00:04:04.366 SO libspdk_nvme.so.15.0 00:04:04.624 CC lib/bdev/bdev_rpc.o 00:04:04.624 CC lib/bdev/scsi_nvme.o 00:04:04.624 CC lib/bdev/part.o 00:04:04.624 CC lib/bdev/bdev.o 00:04:04.624 CC lib/bdev/bdev_zone.o 00:04:04.624 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:04.624 LIB libspdk_event.a 00:04:04.624 SO libspdk_event.so.14.0 00:04:04.624 SYMLINK libspdk_nvme.so 00:04:04.624 SYMLINK libspdk_event.so 00:04:05.192 LIB libspdk_fuse_dispatcher.a 00:04:05.192 SO libspdk_fuse_dispatcher.so.1.0 00:04:05.192 SYMLINK libspdk_fuse_dispatcher.so 00:04:06.218 LIB libspdk_blob.a 00:04:06.218 SO libspdk_blob.so.12.0 00:04:06.481 SYMLINK libspdk_blob.so 00:04:06.739 CC lib/lvol/lvol.o 00:04:06.739 CC lib/blobfs/blobfs.o 00:04:06.739 CC lib/blobfs/tree.o 00:04:07.329 LIB libspdk_bdev.a 00:04:07.586 LIB libspdk_blobfs.a 00:04:07.586 SO libspdk_bdev.so.17.0 00:04:07.586 SO libspdk_blobfs.so.11.0 00:04:07.586 SYMLINK 
libspdk_blobfs.so 00:04:07.586 SYMLINK libspdk_bdev.so 00:04:07.586 LIB libspdk_lvol.a 00:04:07.586 SO libspdk_lvol.so.11.0 00:04:07.586 SYMLINK libspdk_lvol.so 00:04:07.842 CC lib/nbd/nbd.o 00:04:07.842 CC lib/nbd/nbd_rpc.o 00:04:07.842 CC lib/ublk/ublk.o 00:04:07.842 CC lib/scsi/lun.o 00:04:07.842 CC lib/scsi/dev.o 00:04:07.842 CC lib/scsi/port.o 00:04:07.842 CC lib/scsi/scsi.o 00:04:07.842 CC lib/nvmf/ctrlr.o 00:04:07.842 CC lib/ublk/ublk_rpc.o 00:04:07.842 CC lib/ftl/ftl_core.o 00:04:07.842 CC lib/ftl/ftl_init.o 00:04:07.842 CC lib/scsi/scsi_bdev.o 00:04:07.842 CC lib/ftl/ftl_layout.o 00:04:07.842 CC lib/ftl/ftl_debug.o 00:04:07.842 CC lib/ftl/ftl_io.o 00:04:08.100 CC lib/ftl/ftl_sb.o 00:04:08.100 CC lib/ftl/ftl_l2p.o 00:04:08.100 CC lib/ftl/ftl_l2p_flat.o 00:04:08.100 CC lib/nvmf/ctrlr_discovery.o 00:04:08.100 LIB libspdk_nbd.a 00:04:08.100 SO libspdk_nbd.so.7.0 00:04:08.100 CC lib/nvmf/ctrlr_bdev.o 00:04:08.100 CC lib/nvmf/subsystem.o 00:04:08.100 CC lib/ftl/ftl_nv_cache.o 00:04:08.100 SYMLINK libspdk_nbd.so 00:04:08.100 CC lib/ftl/ftl_band.o 00:04:08.357 CC lib/scsi/scsi_pr.o 00:04:08.357 LIB libspdk_ublk.a 00:04:08.357 SO libspdk_ublk.so.3.0 00:04:08.357 CC lib/nvmf/nvmf.o 00:04:08.357 SYMLINK libspdk_ublk.so 00:04:08.357 CC lib/nvmf/nvmf_rpc.o 00:04:08.357 CC lib/nvmf/transport.o 00:04:08.615 CC lib/scsi/scsi_rpc.o 00:04:08.615 CC lib/nvmf/tcp.o 00:04:08.615 CC lib/ftl/ftl_band_ops.o 00:04:08.615 CC lib/scsi/task.o 00:04:08.872 CC lib/nvmf/stubs.o 00:04:08.872 LIB libspdk_scsi.a 00:04:08.872 CC lib/ftl/ftl_writer.o 00:04:08.872 SO libspdk_scsi.so.9.0 00:04:09.130 SYMLINK libspdk_scsi.so 00:04:09.130 CC lib/ftl/ftl_rq.o 00:04:09.130 CC lib/nvmf/mdns_server.o 00:04:09.130 CC lib/nvmf/rdma.o 00:04:09.130 CC lib/nvmf/auth.o 00:04:09.130 CC lib/ftl/ftl_reloc.o 00:04:09.130 CC lib/ftl/ftl_l2p_cache.o 00:04:09.388 CC lib/ftl/ftl_p2l.o 00:04:09.388 CC lib/iscsi/conn.o 00:04:09.388 CC lib/vhost/vhost.o 00:04:09.388 CC lib/iscsi/init_grp.o 00:04:09.645 CC lib/ftl/ftl_p2l_log.o 00:04:09.645 CC lib/vhost/vhost_rpc.o 00:04:09.645 CC lib/ftl/mngt/ftl_mngt.o 00:04:09.645 CC lib/iscsi/iscsi.o 00:04:09.902 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:09.902 CC lib/vhost/vhost_scsi.o 00:04:09.902 CC lib/vhost/vhost_blk.o 00:04:09.902 CC lib/iscsi/param.o 00:04:09.902 CC lib/iscsi/portal_grp.o 00:04:09.902 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:10.159 CC lib/vhost/rte_vhost_user.o 00:04:10.159 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:10.159 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:10.159 CC lib/iscsi/tgt_node.o 00:04:10.159 CC lib/iscsi/iscsi_subsystem.o 00:04:10.159 CC lib/iscsi/iscsi_rpc.o 00:04:10.416 CC lib/iscsi/task.o 00:04:10.416 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:10.416 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:10.673 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:10.673 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:10.673 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:10.673 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:10.673 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:10.673 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:10.673 CC lib/ftl/utils/ftl_conf.o 00:04:10.674 CC lib/ftl/utils/ftl_md.o 00:04:10.674 LIB libspdk_vhost.a 00:04:10.930 CC lib/ftl/utils/ftl_mempool.o 00:04:10.930 CC lib/ftl/utils/ftl_bitmap.o 00:04:10.930 SO libspdk_vhost.so.8.0 00:04:10.930 CC lib/ftl/utils/ftl_property.o 00:04:10.930 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:10.930 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:10.930 SYMLINK libspdk_vhost.so 00:04:10.930 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:10.930 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 
00:04:10.930 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:10.930 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:10.930 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:10.930 LIB libspdk_iscsi.a 00:04:11.188 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:11.188 SO libspdk_iscsi.so.8.0 00:04:11.188 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:11.188 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:11.188 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:11.188 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:11.188 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:11.188 CC lib/ftl/base/ftl_base_bdev.o 00:04:11.188 CC lib/ftl/base/ftl_base_dev.o 00:04:11.188 SYMLINK libspdk_iscsi.so 00:04:11.188 CC lib/ftl/ftl_trace.o 00:04:11.188 LIB libspdk_nvmf.a 00:04:11.446 SO libspdk_nvmf.so.20.0 00:04:11.446 LIB libspdk_ftl.a 00:04:11.446 SYMLINK libspdk_nvmf.so 00:04:11.446 SO libspdk_ftl.so.9.0 00:04:12.012 SYMLINK libspdk_ftl.so 00:04:12.271 CC module/env_dpdk/env_dpdk_rpc.o 00:04:12.271 CC module/keyring/linux/keyring.o 00:04:12.271 CC module/scheduler/gscheduler/gscheduler.o 00:04:12.271 CC module/keyring/file/keyring.o 00:04:12.271 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:12.271 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:12.271 CC module/accel/error/accel_error.o 00:04:12.271 CC module/sock/posix/posix.o 00:04:12.271 CC module/blob/bdev/blob_bdev.o 00:04:12.271 CC module/fsdev/aio/fsdev_aio.o 00:04:12.271 LIB libspdk_env_dpdk_rpc.a 00:04:12.271 SO libspdk_env_dpdk_rpc.so.6.0 00:04:12.271 CC module/keyring/linux/keyring_rpc.o 00:04:12.271 SYMLINK libspdk_env_dpdk_rpc.so 00:04:12.271 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:12.271 LIB libspdk_scheduler_gscheduler.a 00:04:12.271 CC module/keyring/file/keyring_rpc.o 00:04:12.271 LIB libspdk_scheduler_dpdk_governor.a 00:04:12.271 LIB libspdk_scheduler_dynamic.a 00:04:12.271 CC module/accel/error/accel_error_rpc.o 00:04:12.271 SO libspdk_scheduler_gscheduler.so.4.0 00:04:12.271 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:12.271 SO libspdk_scheduler_dynamic.so.4.0 00:04:12.530 SYMLINK libspdk_scheduler_gscheduler.so 00:04:12.530 LIB libspdk_blob_bdev.a 00:04:12.530 SYMLINK libspdk_scheduler_dynamic.so 00:04:12.530 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:12.530 CC module/fsdev/aio/linux_aio_mgr.o 00:04:12.530 LIB libspdk_keyring_linux.a 00:04:12.530 SO libspdk_blob_bdev.so.12.0 00:04:12.530 LIB libspdk_accel_error.a 00:04:12.530 LIB libspdk_keyring_file.a 00:04:12.530 SO libspdk_keyring_linux.so.1.0 00:04:12.530 SO libspdk_keyring_file.so.2.0 00:04:12.530 SO libspdk_accel_error.so.2.0 00:04:12.530 SYMLINK libspdk_blob_bdev.so 00:04:12.530 SYMLINK libspdk_keyring_linux.so 00:04:12.530 SYMLINK libspdk_keyring_file.so 00:04:12.530 SYMLINK libspdk_accel_error.so 00:04:12.530 CC module/accel/dsa/accel_dsa.o 00:04:12.530 CC module/accel/dsa/accel_dsa_rpc.o 00:04:12.530 CC module/accel/ioat/accel_ioat.o 00:04:12.530 CC module/accel/iaa/accel_iaa.o 00:04:12.788 CC module/accel/iaa/accel_iaa_rpc.o 00:04:12.788 CC module/bdev/error/vbdev_error.o 00:04:12.788 CC module/bdev/delay/vbdev_delay.o 00:04:12.788 CC module/blobfs/bdev/blobfs_bdev.o 00:04:12.788 CC module/bdev/gpt/gpt.o 00:04:12.788 CC module/accel/ioat/accel_ioat_rpc.o 00:04:12.788 CC module/bdev/gpt/vbdev_gpt.o 00:04:12.788 LIB libspdk_accel_dsa.a 00:04:12.788 LIB libspdk_accel_iaa.a 00:04:12.788 SO libspdk_accel_dsa.so.5.0 00:04:12.788 SO libspdk_accel_iaa.so.3.0 00:04:12.788 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:12.788 LIB libspdk_accel_ioat.a 00:04:12.788 SYMLINK libspdk_accel_dsa.so 00:04:12.788 SYMLINK 
libspdk_accel_iaa.so 00:04:12.788 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:12.789 SO libspdk_accel_ioat.so.6.0 00:04:12.789 CC module/bdev/error/vbdev_error_rpc.o 00:04:13.047 SYMLINK libspdk_accel_ioat.so 00:04:13.047 LIB libspdk_sock_posix.a 00:04:13.047 LIB libspdk_fsdev_aio.a 00:04:13.047 SO libspdk_sock_posix.so.6.0 00:04:13.047 LIB libspdk_blobfs_bdev.a 00:04:13.047 SO libspdk_fsdev_aio.so.1.0 00:04:13.047 LIB libspdk_bdev_gpt.a 00:04:13.047 SO libspdk_blobfs_bdev.so.6.0 00:04:13.047 SO libspdk_bdev_gpt.so.6.0 00:04:13.047 SYMLINK libspdk_fsdev_aio.so 00:04:13.047 LIB libspdk_bdev_error.a 00:04:13.047 SYMLINK libspdk_blobfs_bdev.so 00:04:13.047 SYMLINK libspdk_sock_posix.so 00:04:13.047 SYMLINK libspdk_bdev_gpt.so 00:04:13.047 CC module/bdev/malloc/bdev_malloc.o 00:04:13.047 LIB libspdk_bdev_delay.a 00:04:13.047 CC module/bdev/lvol/vbdev_lvol.o 00:04:13.047 SO libspdk_bdev_error.so.6.0 00:04:13.047 CC module/bdev/null/bdev_null.o 00:04:13.047 SO libspdk_bdev_delay.so.6.0 00:04:13.047 CC module/bdev/nvme/bdev_nvme.o 00:04:13.047 SYMLINK libspdk_bdev_error.so 00:04:13.047 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:13.047 SYMLINK libspdk_bdev_delay.so 00:04:13.047 CC module/bdev/nvme/nvme_rpc.o 00:04:13.047 CC module/bdev/passthru/vbdev_passthru.o 00:04:13.305 CC module/bdev/split/vbdev_split.o 00:04:13.305 CC module/bdev/raid/bdev_raid.o 00:04:13.305 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:13.305 CC module/bdev/null/bdev_null_rpc.o 00:04:13.305 CC module/bdev/nvme/bdev_mdns_client.o 00:04:13.305 CC module/bdev/split/vbdev_split_rpc.o 00:04:13.305 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:13.564 CC module/bdev/raid/bdev_raid_rpc.o 00:04:13.564 LIB libspdk_bdev_split.a 00:04:13.564 LIB libspdk_bdev_null.a 00:04:13.564 SO libspdk_bdev_split.so.6.0 00:04:13.564 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:13.564 LIB libspdk_bdev_malloc.a 00:04:13.564 SO libspdk_bdev_null.so.6.0 00:04:13.564 SO libspdk_bdev_malloc.so.6.0 00:04:13.564 SYMLINK libspdk_bdev_split.so 00:04:13.564 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:13.564 SYMLINK libspdk_bdev_null.so 00:04:13.564 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:13.564 SYMLINK libspdk_bdev_malloc.so 00:04:13.564 LIB libspdk_bdev_passthru.a 00:04:13.564 LIB libspdk_bdev_zone_block.a 00:04:13.564 SO libspdk_bdev_passthru.so.6.0 00:04:13.564 CC module/bdev/xnvme/bdev_xnvme.o 00:04:13.564 SO libspdk_bdev_zone_block.so.6.0 00:04:13.564 CC module/bdev/aio/bdev_aio.o 00:04:13.822 CC module/bdev/ftl/bdev_ftl.o 00:04:13.822 SYMLINK libspdk_bdev_passthru.so 00:04:13.822 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:13.822 SYMLINK libspdk_bdev_zone_block.so 00:04:13.822 CC module/bdev/nvme/vbdev_opal.o 00:04:13.822 CC module/bdev/iscsi/bdev_iscsi.o 00:04:13.822 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:13.822 LIB libspdk_bdev_lvol.a 00:04:13.822 SO libspdk_bdev_lvol.so.6.0 00:04:13.822 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:13.822 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:14.081 SYMLINK libspdk_bdev_lvol.so 00:04:14.081 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:14.081 CC module/bdev/raid/bdev_raid_sb.o 00:04:14.081 LIB libspdk_bdev_ftl.a 00:04:14.081 CC module/bdev/raid/raid0.o 00:04:14.081 CC module/bdev/aio/bdev_aio_rpc.o 00:04:14.081 SO libspdk_bdev_ftl.so.6.0 00:04:14.081 SYMLINK libspdk_bdev_ftl.so 00:04:14.081 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:14.081 LIB libspdk_bdev_xnvme.a 00:04:14.081 SO libspdk_bdev_xnvme.so.3.0 00:04:14.081 CC module/bdev/raid/raid1.o 00:04:14.081 LIB libspdk_bdev_aio.a 
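The module/bdev/xnvme objects linking around here are where the xnvme subproject built by the meson/ninja run earlier is consumed: step [75/76] left a static lib/libxnvme.a in the builddir, [76/76] the shared libxnvme.so.0.7.5, and bdev_xnvme is the SPDK module layered on top. A small sketch for inspecting those artifacts, assuming the builddir layout from the ninja log (the symbol in the grep is an assumption about xnvme's public buffer API, whose object was compiled at step [57/76]):

  cd /home/vagrant/spdk_repo/spdk/xnvme
  ls -l builddir/lib/libxnvme.a builddir/lib/libxnvme.so.0.7.5
  # Look for an exported allocator symbol from xnvme_buf.c.o (name assumed):
  nm builddir/lib/libxnvme.a | grep ' T xnvme_buf_alloc'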
00:04:14.081 SO libspdk_bdev_aio.so.6.0 00:04:14.081 SYMLINK libspdk_bdev_xnvme.so 00:04:14.081 CC module/bdev/raid/concat.o 00:04:14.081 LIB libspdk_bdev_iscsi.a 00:04:14.081 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:14.339 SO libspdk_bdev_iscsi.so.6.0 00:04:14.339 SYMLINK libspdk_bdev_aio.so 00:04:14.339 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:14.339 SYMLINK libspdk_bdev_iscsi.so 00:04:14.339 LIB libspdk_bdev_virtio.a 00:04:14.339 SO libspdk_bdev_virtio.so.6.0 00:04:14.339 LIB libspdk_bdev_raid.a 00:04:14.339 SYMLINK libspdk_bdev_virtio.so 00:04:14.339 SO libspdk_bdev_raid.so.6.0 00:04:14.598 SYMLINK libspdk_bdev_raid.so 00:04:15.531 LIB libspdk_bdev_nvme.a 00:04:15.531 SO libspdk_bdev_nvme.so.7.1 00:04:15.531 SYMLINK libspdk_bdev_nvme.so 00:04:15.789 CC module/event/subsystems/keyring/keyring.o 00:04:15.789 CC module/event/subsystems/scheduler/scheduler.o 00:04:15.789 CC module/event/subsystems/vmd/vmd.o 00:04:15.789 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:15.789 CC module/event/subsystems/sock/sock.o 00:04:15.789 CC module/event/subsystems/iobuf/iobuf.o 00:04:15.789 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:15.789 CC module/event/subsystems/fsdev/fsdev.o 00:04:15.789 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:16.047 LIB libspdk_event_scheduler.a 00:04:16.047 LIB libspdk_event_fsdev.a 00:04:16.047 LIB libspdk_event_sock.a 00:04:16.047 LIB libspdk_event_vmd.a 00:04:16.047 LIB libspdk_event_iobuf.a 00:04:16.047 SO libspdk_event_scheduler.so.4.0 00:04:16.047 SO libspdk_event_sock.so.5.0 00:04:16.047 SO libspdk_event_fsdev.so.1.0 00:04:16.047 LIB libspdk_event_vhost_blk.a 00:04:16.047 SO libspdk_event_vmd.so.6.0 00:04:16.047 LIB libspdk_event_keyring.a 00:04:16.047 SO libspdk_event_iobuf.so.3.0 00:04:16.047 SO libspdk_event_vhost_blk.so.3.0 00:04:16.047 SO libspdk_event_keyring.so.1.0 00:04:16.047 SYMLINK libspdk_event_scheduler.so 00:04:16.047 SYMLINK libspdk_event_fsdev.so 00:04:16.047 SYMLINK libspdk_event_sock.so 00:04:16.047 SYMLINK libspdk_event_vmd.so 00:04:16.047 SYMLINK libspdk_event_vhost_blk.so 00:04:16.047 SYMLINK libspdk_event_iobuf.so 00:04:16.047 SYMLINK libspdk_event_keyring.so 00:04:16.305 CC module/event/subsystems/accel/accel.o 00:04:16.564 LIB libspdk_event_accel.a 00:04:16.564 SO libspdk_event_accel.so.6.0 00:04:16.564 SYMLINK libspdk_event_accel.so 00:04:16.823 CC module/event/subsystems/bdev/bdev.o 00:04:16.823 LIB libspdk_event_bdev.a 00:04:16.823 SO libspdk_event_bdev.so.6.0 00:04:17.081 SYMLINK libspdk_event_bdev.so 00:04:17.081 CC module/event/subsystems/scsi/scsi.o 00:04:17.081 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:17.081 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:17.081 CC module/event/subsystems/nbd/nbd.o 00:04:17.081 CC module/event/subsystems/ublk/ublk.o 00:04:17.339 LIB libspdk_event_nbd.a 00:04:17.339 LIB libspdk_event_scsi.a 00:04:17.339 SO libspdk_event_nbd.so.6.0 00:04:17.339 LIB libspdk_event_ublk.a 00:04:17.339 SO libspdk_event_scsi.so.6.0 00:04:17.339 SO libspdk_event_ublk.so.3.0 00:04:17.339 SYMLINK libspdk_event_scsi.so 00:04:17.339 SYMLINK libspdk_event_nbd.so 00:04:17.339 SYMLINK libspdk_event_ublk.so 00:04:17.339 LIB libspdk_event_nvmf.a 00:04:17.339 SO libspdk_event_nvmf.so.6.0 00:04:17.339 SYMLINK libspdk_event_nvmf.so 00:04:17.598 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:17.598 CC module/event/subsystems/iscsi/iscsi.o 00:04:17.598 LIB libspdk_event_vhost_scsi.a 00:04:17.598 SO libspdk_event_vhost_scsi.so.3.0 00:04:17.598 LIB libspdk_event_iscsi.a 00:04:17.598 SO 
libspdk_event_iscsi.so.6.0 00:04:17.598 SYMLINK libspdk_event_vhost_scsi.so 00:04:17.857 SYMLINK libspdk_event_iscsi.so 00:04:17.857 SO libspdk.so.6.0 00:04:17.857 SYMLINK libspdk.so 00:04:18.115 CC app/spdk_lspci/spdk_lspci.o 00:04:18.115 CXX app/trace/trace.o 00:04:18.115 CC app/trace_record/trace_record.o 00:04:18.115 CC app/iscsi_tgt/iscsi_tgt.o 00:04:18.115 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:18.115 CC app/nvmf_tgt/nvmf_main.o 00:04:18.115 CC examples/util/zipf/zipf.o 00:04:18.115 CC test/thread/poller_perf/poller_perf.o 00:04:18.115 CC app/spdk_tgt/spdk_tgt.o 00:04:18.115 CC examples/ioat/perf/perf.o 00:04:18.115 LINK spdk_lspci 00:04:18.115 LINK iscsi_tgt 00:04:18.375 LINK interrupt_tgt 00:04:18.375 LINK poller_perf 00:04:18.375 LINK zipf 00:04:18.375 LINK nvmf_tgt 00:04:18.375 LINK spdk_trace_record 00:04:18.375 LINK spdk_tgt 00:04:18.375 LINK ioat_perf 00:04:18.375 CC app/spdk_nvme_perf/perf.o 00:04:18.375 LINK spdk_trace 00:04:18.375 TEST_HEADER include/spdk/accel.h 00:04:18.375 TEST_HEADER include/spdk/accel_module.h 00:04:18.375 TEST_HEADER include/spdk/assert.h 00:04:18.375 TEST_HEADER include/spdk/barrier.h 00:04:18.375 TEST_HEADER include/spdk/base64.h 00:04:18.375 TEST_HEADER include/spdk/bdev.h 00:04:18.375 TEST_HEADER include/spdk/bdev_module.h 00:04:18.375 TEST_HEADER include/spdk/bdev_zone.h 00:04:18.375 TEST_HEADER include/spdk/bit_array.h 00:04:18.375 TEST_HEADER include/spdk/bit_pool.h 00:04:18.375 TEST_HEADER include/spdk/blob_bdev.h 00:04:18.375 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:18.375 TEST_HEADER include/spdk/blobfs.h 00:04:18.375 TEST_HEADER include/spdk/blob.h 00:04:18.375 TEST_HEADER include/spdk/conf.h 00:04:18.375 TEST_HEADER include/spdk/config.h 00:04:18.375 TEST_HEADER include/spdk/cpuset.h 00:04:18.375 TEST_HEADER include/spdk/crc16.h 00:04:18.375 TEST_HEADER include/spdk/crc32.h 00:04:18.375 TEST_HEADER include/spdk/crc64.h 00:04:18.375 TEST_HEADER include/spdk/dif.h 00:04:18.632 TEST_HEADER include/spdk/dma.h 00:04:18.632 TEST_HEADER include/spdk/endian.h 00:04:18.632 TEST_HEADER include/spdk/env_dpdk.h 00:04:18.632 TEST_HEADER include/spdk/env.h 00:04:18.632 TEST_HEADER include/spdk/event.h 00:04:18.632 TEST_HEADER include/spdk/fd_group.h 00:04:18.632 TEST_HEADER include/spdk/fd.h 00:04:18.632 TEST_HEADER include/spdk/file.h 00:04:18.632 TEST_HEADER include/spdk/fsdev.h 00:04:18.632 TEST_HEADER include/spdk/fsdev_module.h 00:04:18.632 TEST_HEADER include/spdk/ftl.h 00:04:18.632 CC examples/ioat/verify/verify.o 00:04:18.632 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:18.632 TEST_HEADER include/spdk/gpt_spec.h 00:04:18.632 TEST_HEADER include/spdk/hexlify.h 00:04:18.632 TEST_HEADER include/spdk/histogram_data.h 00:04:18.632 TEST_HEADER include/spdk/idxd.h 00:04:18.632 TEST_HEADER include/spdk/idxd_spec.h 00:04:18.632 TEST_HEADER include/spdk/init.h 00:04:18.632 TEST_HEADER include/spdk/ioat.h 00:04:18.633 TEST_HEADER include/spdk/ioat_spec.h 00:04:18.633 TEST_HEADER include/spdk/iscsi_spec.h 00:04:18.633 TEST_HEADER include/spdk/json.h 00:04:18.633 TEST_HEADER include/spdk/jsonrpc.h 00:04:18.633 TEST_HEADER include/spdk/keyring.h 00:04:18.633 TEST_HEADER include/spdk/keyring_module.h 00:04:18.633 TEST_HEADER include/spdk/likely.h 00:04:18.633 TEST_HEADER include/spdk/log.h 00:04:18.633 TEST_HEADER include/spdk/lvol.h 00:04:18.633 TEST_HEADER include/spdk/md5.h 00:04:18.633 CC test/dma/test_dma/test_dma.o 00:04:18.633 TEST_HEADER include/spdk/memory.h 00:04:18.633 TEST_HEADER include/spdk/mmio.h 00:04:18.633 TEST_HEADER 
include/spdk/nbd.h 00:04:18.633 TEST_HEADER include/spdk/net.h 00:04:18.633 TEST_HEADER include/spdk/notify.h 00:04:18.633 TEST_HEADER include/spdk/nvme.h 00:04:18.633 TEST_HEADER include/spdk/nvme_intel.h 00:04:18.633 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:18.633 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:18.633 TEST_HEADER include/spdk/nvme_spec.h 00:04:18.633 CC test/app/bdev_svc/bdev_svc.o 00:04:18.633 TEST_HEADER include/spdk/nvme_zns.h 00:04:18.633 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:18.633 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:18.633 TEST_HEADER include/spdk/nvmf.h 00:04:18.633 TEST_HEADER include/spdk/nvmf_spec.h 00:04:18.633 CC app/spdk_nvme_identify/identify.o 00:04:18.633 TEST_HEADER include/spdk/nvmf_transport.h 00:04:18.633 TEST_HEADER include/spdk/opal.h 00:04:18.633 CC test/event/event_perf/event_perf.o 00:04:18.633 TEST_HEADER include/spdk/opal_spec.h 00:04:18.633 TEST_HEADER include/spdk/pci_ids.h 00:04:18.633 TEST_HEADER include/spdk/pipe.h 00:04:18.633 TEST_HEADER include/spdk/queue.h 00:04:18.633 CC examples/thread/thread/thread_ex.o 00:04:18.633 TEST_HEADER include/spdk/reduce.h 00:04:18.633 TEST_HEADER include/spdk/rpc.h 00:04:18.633 TEST_HEADER include/spdk/scheduler.h 00:04:18.633 TEST_HEADER include/spdk/scsi.h 00:04:18.633 TEST_HEADER include/spdk/scsi_spec.h 00:04:18.633 TEST_HEADER include/spdk/sock.h 00:04:18.633 TEST_HEADER include/spdk/stdinc.h 00:04:18.633 TEST_HEADER include/spdk/string.h 00:04:18.633 TEST_HEADER include/spdk/thread.h 00:04:18.633 TEST_HEADER include/spdk/trace.h 00:04:18.633 TEST_HEADER include/spdk/trace_parser.h 00:04:18.633 TEST_HEADER include/spdk/tree.h 00:04:18.633 CC test/env/vtophys/vtophys.o 00:04:18.633 TEST_HEADER include/spdk/ublk.h 00:04:18.633 TEST_HEADER include/spdk/util.h 00:04:18.633 TEST_HEADER include/spdk/uuid.h 00:04:18.633 TEST_HEADER include/spdk/version.h 00:04:18.633 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:18.633 CC test/env/mem_callbacks/mem_callbacks.o 00:04:18.633 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:18.633 TEST_HEADER include/spdk/vhost.h 00:04:18.633 TEST_HEADER include/spdk/vmd.h 00:04:18.633 TEST_HEADER include/spdk/xor.h 00:04:18.633 TEST_HEADER include/spdk/zipf.h 00:04:18.633 CXX test/cpp_headers/accel.o 00:04:18.633 LINK event_perf 00:04:18.633 LINK bdev_svc 00:04:18.633 LINK verify 00:04:18.633 LINK vtophys 00:04:18.925 CXX test/cpp_headers/accel_module.o 00:04:18.925 LINK thread 00:04:18.925 CC test/event/reactor_perf/reactor_perf.o 00:04:18.925 CC test/event/reactor/reactor.o 00:04:18.925 CXX test/cpp_headers/assert.o 00:04:18.925 CC test/app/histogram_perf/histogram_perf.o 00:04:18.925 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:18.925 CXX test/cpp_headers/barrier.o 00:04:18.925 LINK reactor_perf 00:04:18.925 LINK test_dma 00:04:18.925 LINK reactor 00:04:19.183 LINK histogram_perf 00:04:19.183 LINK mem_callbacks 00:04:19.183 CC examples/sock/hello_world/hello_sock.o 00:04:19.183 CXX test/cpp_headers/base64.o 00:04:19.183 LINK spdk_nvme_perf 00:04:19.183 CC test/event/app_repeat/app_repeat.o 00:04:19.183 CC examples/vmd/lsvmd/lsvmd.o 00:04:19.183 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:19.183 CXX test/cpp_headers/bdev.o 00:04:19.183 CC test/event/scheduler/scheduler.o 00:04:19.443 CC examples/idxd/perf/perf.o 00:04:19.443 LINK app_repeat 00:04:19.443 LINK hello_sock 00:04:19.443 LINK spdk_nvme_identify 00:04:19.443 LINK nvme_fuzz 00:04:19.443 LINK lsvmd 00:04:19.443 LINK env_dpdk_post_init 00:04:19.443 CXX test/cpp_headers/bdev_module.o 
00:04:19.443 CXX test/cpp_headers/bdev_zone.o 00:04:19.443 CXX test/cpp_headers/bit_array.o 00:04:19.443 LINK scheduler 00:04:19.443 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:19.443 CC examples/vmd/led/led.o 00:04:19.701 CC app/spdk_nvme_discover/discovery_aer.o 00:04:19.701 LINK idxd_perf 00:04:19.701 CC test/env/memory/memory_ut.o 00:04:19.701 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:19.701 CXX test/cpp_headers/bit_pool.o 00:04:19.701 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:19.701 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:19.701 CC test/app/jsoncat/jsoncat.o 00:04:19.701 LINK led 00:04:19.701 LINK hello_fsdev 00:04:19.701 CXX test/cpp_headers/blob_bdev.o 00:04:19.701 LINK spdk_nvme_discover 00:04:19.701 LINK jsoncat 00:04:19.701 CC examples/accel/perf/accel_perf.o 00:04:19.960 CC app/spdk_top/spdk_top.o 00:04:19.960 CC test/env/pci/pci_ut.o 00:04:19.960 CXX test/cpp_headers/blobfs_bdev.o 00:04:19.960 CC test/app/stub/stub.o 00:04:19.960 CC examples/nvme/hello_world/hello_world.o 00:04:19.960 CC examples/blob/hello_world/hello_blob.o 00:04:19.960 LINK vhost_fuzz 00:04:20.218 CXX test/cpp_headers/blobfs.o 00:04:20.218 LINK stub 00:04:20.218 CXX test/cpp_headers/blob.o 00:04:20.218 CXX test/cpp_headers/conf.o 00:04:20.218 LINK hello_world 00:04:20.218 CXX test/cpp_headers/config.o 00:04:20.218 LINK hello_blob 00:04:20.218 LINK pci_ut 00:04:20.218 CXX test/cpp_headers/cpuset.o 00:04:20.218 LINK accel_perf 00:04:20.476 CXX test/cpp_headers/crc16.o 00:04:20.476 CC examples/nvme/reconnect/reconnect.o 00:04:20.476 CC examples/blob/cli/blobcli.o 00:04:20.476 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:20.476 CC test/rpc_client/rpc_client_test.o 00:04:20.476 CXX test/cpp_headers/crc32.o 00:04:20.476 CC app/vhost/vhost.o 00:04:20.477 LINK spdk_top 00:04:20.735 CC examples/bdev/hello_world/hello_bdev.o 00:04:20.735 LINK memory_ut 00:04:20.735 LINK rpc_client_test 00:04:20.735 CXX test/cpp_headers/crc64.o 00:04:20.735 LINK vhost 00:04:20.735 LINK reconnect 00:04:20.735 CXX test/cpp_headers/dif.o 00:04:20.735 LINK hello_bdev 00:04:20.735 CC test/accel/dif/dif.o 00:04:20.735 CC examples/bdev/bdevperf/bdevperf.o 00:04:20.735 CC app/spdk_dd/spdk_dd.o 00:04:20.994 LINK blobcli 00:04:20.994 CXX test/cpp_headers/dma.o 00:04:20.994 LINK iscsi_fuzz 00:04:20.994 CC examples/nvme/arbitration/arbitration.o 00:04:20.994 LINK nvme_manage 00:04:20.994 CC app/fio/nvme/fio_plugin.o 00:04:20.994 CXX test/cpp_headers/endian.o 00:04:20.994 CC app/fio/bdev/fio_plugin.o 00:04:20.994 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:20.994 CC examples/nvme/hotplug/hotplug.o 00:04:20.994 CXX test/cpp_headers/env_dpdk.o 00:04:21.252 CC test/blobfs/mkfs/mkfs.o 00:04:21.252 LINK arbitration 00:04:21.252 LINK spdk_dd 00:04:21.252 CXX test/cpp_headers/env.o 00:04:21.252 LINK cmb_copy 00:04:21.252 LINK hotplug 00:04:21.511 LINK mkfs 00:04:21.511 CXX test/cpp_headers/event.o 00:04:21.511 LINK spdk_bdev 00:04:21.511 LINK spdk_nvme 00:04:21.511 LINK dif 00:04:21.511 CC examples/nvme/abort/abort.o 00:04:21.511 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:21.511 CXX test/cpp_headers/fd_group.o 00:04:21.511 CC test/nvme/aer/aer.o 00:04:21.511 CXX test/cpp_headers/fd.o 00:04:21.511 CC test/lvol/esnap/esnap.o 00:04:21.511 CC test/nvme/reset/reset.o 00:04:21.511 CC test/nvme/sgl/sgl.o 00:04:21.511 LINK bdevperf 00:04:21.770 CXX test/cpp_headers/file.o 00:04:21.770 CC test/nvme/e2edp/nvme_dp.o 00:04:21.770 LINK pmr_persistence 00:04:21.770 CC test/nvme/overhead/overhead.o 00:04:21.770 CXX 
test/cpp_headers/fsdev.o 00:04:21.770 LINK aer 00:04:21.770 LINK reset 00:04:21.770 CXX test/cpp_headers/fsdev_module.o 00:04:21.770 LINK sgl 00:04:21.770 CXX test/cpp_headers/ftl.o 00:04:21.770 LINK abort 00:04:22.028 CC test/nvme/err_injection/err_injection.o 00:04:22.028 LINK nvme_dp 00:04:22.028 CXX test/cpp_headers/fuse_dispatcher.o 00:04:22.028 CC test/nvme/startup/startup.o 00:04:22.028 CXX test/cpp_headers/gpt_spec.o 00:04:22.028 CC test/bdev/bdevio/bdevio.o 00:04:22.028 LINK overhead 00:04:22.028 CC test/nvme/reserve/reserve.o 00:04:22.028 LINK err_injection 00:04:22.028 CXX test/cpp_headers/hexlify.o 00:04:22.028 LINK startup 00:04:22.028 CC test/nvme/simple_copy/simple_copy.o 00:04:22.287 CC examples/nvmf/nvmf/nvmf.o 00:04:22.287 CC test/nvme/connect_stress/connect_stress.o 00:04:22.287 CC test/nvme/boot_partition/boot_partition.o 00:04:22.287 LINK reserve 00:04:22.287 CC test/nvme/compliance/nvme_compliance.o 00:04:22.287 CXX test/cpp_headers/histogram_data.o 00:04:22.287 CC test/nvme/fused_ordering/fused_ordering.o 00:04:22.287 LINK simple_copy 00:04:22.287 LINK boot_partition 00:04:22.287 LINK connect_stress 00:04:22.287 LINK bdevio 00:04:22.287 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:22.287 CXX test/cpp_headers/idxd.o 00:04:22.544 LINK fused_ordering 00:04:22.544 CXX test/cpp_headers/idxd_spec.o 00:04:22.544 LINK nvmf 00:04:22.544 CXX test/cpp_headers/init.o 00:04:22.544 CC test/nvme/cuse/cuse.o 00:04:22.544 CC test/nvme/fdp/fdp.o 00:04:22.545 CXX test/cpp_headers/ioat.o 00:04:22.545 LINK nvme_compliance 00:04:22.545 LINK doorbell_aers 00:04:22.545 CXX test/cpp_headers/ioat_spec.o 00:04:22.545 CXX test/cpp_headers/iscsi_spec.o 00:04:22.545 CXX test/cpp_headers/json.o 00:04:22.545 CXX test/cpp_headers/jsonrpc.o 00:04:22.545 CXX test/cpp_headers/keyring.o 00:04:22.803 CXX test/cpp_headers/keyring_module.o 00:04:22.803 CXX test/cpp_headers/likely.o 00:04:22.803 CXX test/cpp_headers/log.o 00:04:22.803 CXX test/cpp_headers/lvol.o 00:04:22.803 CXX test/cpp_headers/md5.o 00:04:22.803 CXX test/cpp_headers/memory.o 00:04:22.803 CXX test/cpp_headers/mmio.o 00:04:22.803 CXX test/cpp_headers/nbd.o 00:04:22.803 CXX test/cpp_headers/net.o 00:04:22.803 CXX test/cpp_headers/notify.o 00:04:22.803 CXX test/cpp_headers/nvme.o 00:04:22.803 LINK fdp 00:04:22.803 CXX test/cpp_headers/nvme_intel.o 00:04:22.803 CXX test/cpp_headers/nvme_ocssd.o 00:04:22.803 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:23.062 CXX test/cpp_headers/nvme_spec.o 00:04:23.062 CXX test/cpp_headers/nvme_zns.o 00:04:23.062 CXX test/cpp_headers/nvmf_cmd.o 00:04:23.062 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:23.062 CXX test/cpp_headers/nvmf.o 00:04:23.062 CXX test/cpp_headers/nvmf_spec.o 00:04:23.062 CXX test/cpp_headers/nvmf_transport.o 00:04:23.062 CXX test/cpp_headers/opal.o 00:04:23.062 CXX test/cpp_headers/opal_spec.o 00:04:23.062 CXX test/cpp_headers/pci_ids.o 00:04:23.062 CXX test/cpp_headers/pipe.o 00:04:23.062 CXX test/cpp_headers/queue.o 00:04:23.062 CXX test/cpp_headers/reduce.o 00:04:23.062 CXX test/cpp_headers/rpc.o 00:04:23.062 CXX test/cpp_headers/scheduler.o 00:04:23.062 CXX test/cpp_headers/scsi.o 00:04:23.062 CXX test/cpp_headers/scsi_spec.o 00:04:23.062 CXX test/cpp_headers/sock.o 00:04:23.062 CXX test/cpp_headers/stdinc.o 00:04:23.320 CXX test/cpp_headers/string.o 00:04:23.320 CXX test/cpp_headers/thread.o 00:04:23.320 CXX test/cpp_headers/trace.o 00:04:23.320 CXX test/cpp_headers/trace_parser.o 00:04:23.320 CXX test/cpp_headers/tree.o 00:04:23.320 CXX test/cpp_headers/ublk.o 00:04:23.320 CXX 
test/cpp_headers/util.o 00:04:23.320 CXX test/cpp_headers/uuid.o 00:04:23.320 CXX test/cpp_headers/version.o 00:04:23.320 CXX test/cpp_headers/vfio_user_pci.o 00:04:23.320 CXX test/cpp_headers/vfio_user_spec.o 00:04:23.320 CXX test/cpp_headers/vhost.o 00:04:23.320 CXX test/cpp_headers/vmd.o 00:04:23.320 CXX test/cpp_headers/xor.o 00:04:23.320 CXX test/cpp_headers/zipf.o 00:04:23.888 LINK cuse 00:04:26.428 LINK esnap 00:04:26.690 00:04:26.690 real 1m4.935s 00:04:26.690 user 5m7.861s 00:04:26.690 sys 0m51.419s 00:04:26.690 18:17:46 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:26.690 18:17:46 make -- common/autotest_common.sh@10 -- $ set +x 00:04:26.690 ************************************ 00:04:26.690 END TEST make 00:04:26.690 ************************************ 00:04:26.690 18:17:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:26.690 18:17:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:26.690 18:17:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:26.690 18:17:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.690 18:17:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:26.690 18:17:46 -- pm/common@44 -- $ pid=5814 00:04:26.690 18:17:46 -- pm/common@50 -- $ kill -TERM 5814 00:04:26.690 18:17:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.690 18:17:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:26.690 18:17:46 -- pm/common@44 -- $ pid=5815 00:04:26.690 18:17:46 -- pm/common@50 -- $ kill -TERM 5815 00:04:26.690 18:17:46 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:26.690 18:17:46 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:26.690 18:17:46 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:26.690 18:17:46 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:26.690 18:17:46 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:26.690 18:17:46 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:26.690 18:17:46 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.690 18:17:46 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.690 18:17:46 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.690 18:17:46 -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.690 18:17:46 -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.690 18:17:46 -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.690 18:17:46 -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.690 18:17:46 -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.690 18:17:46 -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.690 18:17:46 -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.690 18:17:46 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.690 18:17:46 -- scripts/common.sh@344 -- # case "$op" in 00:04:26.690 18:17:46 -- scripts/common.sh@345 -- # : 1 00:04:26.690 18:17:46 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.690 18:17:46 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.690 18:17:46 -- scripts/common.sh@365 -- # decimal 1 00:04:26.690 18:17:46 -- scripts/common.sh@353 -- # local d=1 00:04:26.690 18:17:46 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.690 18:17:46 -- scripts/common.sh@355 -- # echo 1 00:04:26.690 18:17:46 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.691 18:17:46 -- scripts/common.sh@366 -- # decimal 2 00:04:26.691 18:17:46 -- scripts/common.sh@353 -- # local d=2 00:04:26.691 18:17:46 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.691 18:17:46 -- scripts/common.sh@355 -- # echo 2 00:04:26.691 18:17:46 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.691 18:17:46 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.691 18:17:46 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.691 18:17:46 -- scripts/common.sh@368 -- # return 0 00:04:26.691 18:17:46 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.691 18:17:46 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:26.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.691 --rc genhtml_branch_coverage=1 00:04:26.691 --rc genhtml_function_coverage=1 00:04:26.691 --rc genhtml_legend=1 00:04:26.691 --rc geninfo_all_blocks=1 00:04:26.691 --rc geninfo_unexecuted_blocks=1 00:04:26.691 00:04:26.691 ' 00:04:26.691 18:17:46 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:26.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.691 --rc genhtml_branch_coverage=1 00:04:26.691 --rc genhtml_function_coverage=1 00:04:26.691 --rc genhtml_legend=1 00:04:26.691 --rc geninfo_all_blocks=1 00:04:26.691 --rc geninfo_unexecuted_blocks=1 00:04:26.691 00:04:26.691 ' 00:04:26.691 18:17:46 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:26.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.691 --rc genhtml_branch_coverage=1 00:04:26.691 --rc genhtml_function_coverage=1 00:04:26.691 --rc genhtml_legend=1 00:04:26.691 --rc geninfo_all_blocks=1 00:04:26.691 --rc geninfo_unexecuted_blocks=1 00:04:26.691 00:04:26.691 ' 00:04:26.691 18:17:46 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:26.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.691 --rc genhtml_branch_coverage=1 00:04:26.691 --rc genhtml_function_coverage=1 00:04:26.691 --rc genhtml_legend=1 00:04:26.691 --rc geninfo_all_blocks=1 00:04:26.691 --rc geninfo_unexecuted_blocks=1 00:04:26.691 00:04:26.691 ' 00:04:26.691 18:17:46 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:26.691 18:17:46 -- nvmf/common.sh@7 -- # uname -s 00:04:26.691 18:17:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:26.691 18:17:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:26.691 18:17:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:26.691 18:17:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:26.691 18:17:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:26.691 18:17:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:26.691 18:17:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:26.691 18:17:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:26.691 18:17:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:26.691 18:17:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:26.691 18:17:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:818795fc-dc27-44ec-8ba6-9a6f5db0f26d 00:04:26.691 
18:17:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=818795fc-dc27-44ec-8ba6-9a6f5db0f26d 00:04:26.691 18:17:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:26.691 18:17:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:26.691 18:17:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:26.691 18:17:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:26.691 18:17:46 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:26.691 18:17:46 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:26.691 18:17:46 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:26.691 18:17:46 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:26.691 18:17:46 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:26.691 18:17:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.691 18:17:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.691 18:17:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.691 18:17:46 -- paths/export.sh@5 -- # export PATH 00:04:26.691 18:17:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.691 18:17:46 -- nvmf/common.sh@51 -- # : 0 00:04:26.691 18:17:46 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:26.691 18:17:46 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:26.691 18:17:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:26.691 18:17:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:26.691 18:17:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:26.691 18:17:46 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:26.691 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:26.691 18:17:46 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:26.691 18:17:46 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:26.691 18:17:46 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:26.691 18:17:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:26.691 18:17:46 -- spdk/autotest.sh@32 -- # uname -s 00:04:26.691 18:17:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:26.691 18:17:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:26.691 18:17:46 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:26.691 18:17:46 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:26.691 18:17:46 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:26.691 18:17:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:26.953 18:17:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:26.953 18:17:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:26.953 18:17:46 -- spdk/autotest.sh@48 -- # udevadm_pid=66604 00:04:26.953 18:17:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:26.953 18:17:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:26.953 18:17:46 -- pm/common@17 -- # local monitor 00:04:26.953 18:17:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.953 18:17:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.953 18:17:46 -- pm/common@25 -- # sleep 1 00:04:26.953 18:17:46 -- pm/common@21 -- # date +%s 00:04:26.953 18:17:46 -- pm/common@21 -- # date +%s 00:04:26.953 18:17:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732904266 00:04:26.953 18:17:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732904266 00:04:26.953 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732904266_collect-vmstat.pm.log 00:04:26.953 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732904266_collect-cpu-load.pm.log 00:04:27.897 18:17:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:27.897 18:17:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:27.897 18:17:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:27.897 18:17:47 -- common/autotest_common.sh@10 -- # set +x 00:04:27.897 18:17:47 -- spdk/autotest.sh@59 -- # create_test_list 00:04:27.897 18:17:47 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:27.897 18:17:47 -- common/autotest_common.sh@10 -- # set +x 00:04:27.897 18:17:47 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:27.897 18:17:47 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:27.897 18:17:47 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:27.897 18:17:47 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:27.897 18:17:47 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:27.897 18:17:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:27.897 18:17:47 -- common/autotest_common.sh@1457 -- # uname 00:04:27.897 18:17:47 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:27.897 18:17:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:27.897 18:17:47 -- common/autotest_common.sh@1477 -- # uname 00:04:27.897 18:17:47 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:27.897 18:17:47 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:27.897 18:17:47 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:27.897 lcov: LCOV version 1.15 00:04:27.897 18:17:47 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:42.804 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:42.804 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:57.697 18:18:16 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:57.697 18:18:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:57.697 18:18:16 -- common/autotest_common.sh@10 -- # set +x 00:04:57.697 18:18:16 -- spdk/autotest.sh@78 -- # rm -f 00:04:57.697 18:18:16 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.697 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.959 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:57.959 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:57.959 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:57.959 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:58.221 18:18:17 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:58.221 18:18:17 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:58.221 18:18:17 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:58.221 18:18:17 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:58.221 18:18:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:58.221 18:18:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:58.221 18:18:17 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:58.221 18:18:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:58.221 18:18:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:58.221 18:18:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:58.221 18:18:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:58.221 18:18:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:58.221 18:18:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:58.221 18:18:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:58.221 18:18:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:58.221 18:18:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:58.221 18:18:17 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:58.221 18:18:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:58.221 18:18:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:58.221 18:18:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:58.221 18:18:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:04:58.221 18:18:17 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:58.221 18:18:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:58.221 18:18:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:58.221 18:18:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:58.221 18:18:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:04:58.221 18:18:17 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:58.221 18:18:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:58.221 18:18:17 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:58.221 18:18:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:58.221 18:18:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:58.221 18:18:17 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:58.221 18:18:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:58.221 18:18:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:58.221 18:18:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:58.221 18:18:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:58.221 18:18:17 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:58.221 18:18:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:58.221 18:18:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:58.221 18:18:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:58.221 18:18:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:58.221 18:18:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:58.221 18:18:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:58.221 18:18:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:58.221 18:18:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:58.221 No valid GPT data, bailing 00:04:58.221 18:18:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:58.221 18:18:17 -- scripts/common.sh@394 -- # pt= 00:04:58.221 18:18:17 -- scripts/common.sh@395 -- # return 1 00:04:58.221 18:18:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:58.221 1+0 records in 00:04:58.221 1+0 records out 00:04:58.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228966 s, 45.8 MB/s 00:04:58.221 18:18:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:58.221 18:18:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:58.221 18:18:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:58.221 18:18:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:58.221 18:18:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:58.221 No valid GPT data, bailing 00:04:58.221 18:18:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:58.221 18:18:18 -- scripts/common.sh@394 -- # pt= 00:04:58.221 18:18:18 -- scripts/common.sh@395 -- # return 1 00:04:58.221 18:18:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:58.221 1+0 records in 00:04:58.221 1+0 records out 00:04:58.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00602153 s, 174 MB/s 00:04:58.221 18:18:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:58.221 18:18:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:58.221 18:18:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:58.221 18:18:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:58.221 18:18:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:58.221 No valid GPT data, bailing 00:04:58.482 18:18:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:58.482 18:18:18 -- scripts/common.sh@394 -- # pt= 00:04:58.482 18:18:18 -- scripts/common.sh@395 -- # return 1 00:04:58.482 18:18:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:58.482 1+0 
records in 00:04:58.482 1+0 records out 00:04:58.482 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00570398 s, 184 MB/s 00:04:58.482 18:18:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:58.482 18:18:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:58.482 18:18:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:58.482 18:18:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:58.482 18:18:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:58.482 No valid GPT data, bailing 00:04:58.482 18:18:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:58.482 18:18:18 -- scripts/common.sh@394 -- # pt= 00:04:58.482 18:18:18 -- scripts/common.sh@395 -- # return 1 00:04:58.482 18:18:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:58.482 1+0 records in 00:04:58.482 1+0 records out 00:04:58.482 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0061061 s, 172 MB/s 00:04:58.482 18:18:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:58.482 18:18:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:58.482 18:18:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:58.482 18:18:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:58.482 18:18:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:58.482 No valid GPT data, bailing 00:04:58.482 18:18:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:58.482 18:18:18 -- scripts/common.sh@394 -- # pt= 00:04:58.482 18:18:18 -- scripts/common.sh@395 -- # return 1 00:04:58.482 18:18:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:58.482 1+0 records in 00:04:58.482 1+0 records out 00:04:58.482 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00680511 s, 154 MB/s 00:04:58.482 18:18:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:58.482 18:18:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:58.482 18:18:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:58.482 18:18:18 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:58.482 18:18:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:58.482 No valid GPT data, bailing 00:04:58.744 18:18:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:58.744 18:18:18 -- scripts/common.sh@394 -- # pt= 00:04:58.744 18:18:18 -- scripts/common.sh@395 -- # return 1 00:04:58.744 18:18:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:58.744 1+0 records in 00:04:58.744 1+0 records out 00:04:58.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00638737 s, 164 MB/s 00:04:58.744 18:18:18 -- spdk/autotest.sh@105 -- # sync 00:04:58.744 18:18:18 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:58.744 18:18:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:58.744 18:18:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:00.657 18:18:20 -- spdk/autotest.sh@111 -- # uname -s 00:05:00.657 18:18:20 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:00.657 18:18:20 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:00.657 18:18:20 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:00.918 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.179 
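The block_in_use checks above pair scripts/spdk-gpt.py with blkid: when neither finds a partition signature ("No valid GPT data, bailing", then an empty PTTYPE), autotest zeroes the first MiB of the namespace so stale metadata cannot leak into later tests. A hedged condensation of that loop, using the same device glob as the trace:

  # hedged condensation of the wipe loop traced above
  shopt -s extglob
  for dev in /dev/nvme*n!(*p*); do                # whole namespaces, not partitions
      pt=$(blkid -s PTTYPE -o value "$dev")       # empty when no partition table found
      if [[ -z $pt ]]; then
          dd if=/dev/zero of="$dev" bs=1M count=1 # zero the metadata region
      fi
  done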
Hugepages 00:05:01.179 node hugesize free / total 00:05:01.179 node0 1048576kB 0 / 0 00:05:01.179 node0 2048kB 0 / 0 00:05:01.179 00:05:01.179 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:01.179 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:01.179 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:01.440 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:01.440 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:01.440 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:01.440 18:18:21 -- spdk/autotest.sh@117 -- # uname -s 00:05:01.440 18:18:21 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:01.440 18:18:21 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:01.440 18:18:21 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.273 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.273 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.534 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.534 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.534 18:18:22 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:03.478 18:18:23 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:03.478 18:18:23 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:03.478 18:18:23 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:03.478 18:18:23 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:03.478 18:18:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:03.478 18:18:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:03.478 18:18:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:03.478 18:18:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:03.478 18:18:23 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:03.478 18:18:23 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:03.478 18:18:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:03.478 18:18:23 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.049 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.049 Waiting for block devices as requested 00:05:04.049 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.049 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.308 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.308 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:09.596 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:09.596 18:18:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:09.596 18:18:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:09.596 18:18:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:09.596 18:18:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:09.596 18:18:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:09.596 18:18:29 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:09.596 18:18:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:09.596 18:18:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:09.596 18:18:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:09.596 18:18:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:09.596 18:18:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:09.596 18:18:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:09.596 18:18:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:09.596 18:18:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:09.596 18:18:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:09.596 18:18:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:09.596 18:18:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:09.596 18:18:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:09.596 18:18:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:09.596 18:18:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:09.596 18:18:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:09.596 18:18:29 -- common/autotest_common.sh@1543 -- # continue 00:05:09.596 18:18:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:09.596 18:18:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:09.596 18:18:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:09.596 18:18:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:09.596 18:18:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:09.596 18:18:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:09.596 18:18:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:09.596 18:18:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:09.596 18:18:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:09.596 18:18:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:09.596 18:18:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:09.596 18:18:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:09.596 18:18:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:09.596 18:18:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:09.596 18:18:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:09.596 18:18:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:09.596 18:18:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:09.596 18:18:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:09.596 18:18:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:09.596 18:18:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:09.596 18:18:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:09.596 18:18:29 -- common/autotest_common.sh@1543 -- # continue 00:05:09.597 18:18:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:09.597 18:18:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:09.597 18:18:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 
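The readlink/grep/basename triplet above is how get_nvme_ctrlr_from_bdf maps a PCI address to its character device: resolve every /sys/class/nvme/nvmeX symlink, keep the one sitting under the requested BDF, and take its basename. A hedged reconstruction of the traced function:

  # hedged reconstruction of get_nvme_ctrlr_from_bdf as traced above
  get_nvme_ctrlr_from_bdf() {
      local bdf=$1 sysfs_path
      # resolve which /sys/class/nvme/nvmeX sits under this PCI address
      sysfs_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
      [[ -z $sysfs_path ]] && return 1
      printf '/dev/%s\n' "$(basename "$sysfs_path")"  # e.g. /dev/nvme1 for 0000:00:10.0
  }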
00:05:09.597 18:18:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:09.597 18:18:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:09.597 18:18:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:09.597 18:18:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:09.597 18:18:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:09.597 18:18:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:09.597 18:18:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:09.597 18:18:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:09.597 18:18:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:09.597 18:18:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:09.597 18:18:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:09.597 18:18:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:09.597 18:18:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:09.597 18:18:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:09.597 18:18:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:09.597 18:18:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:09.597 18:18:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:09.597 18:18:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:09.597 18:18:29 -- common/autotest_common.sh@1543 -- # continue 00:05:09.597 18:18:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:09.597 18:18:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:09.597 18:18:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:09.597 18:18:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:09.597 18:18:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:09.597 18:18:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:09.597 18:18:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:09.597 18:18:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:09.597 18:18:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:09.597 18:18:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:09.597 18:18:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:09.597 18:18:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:09.597 18:18:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:09.597 18:18:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:09.597 18:18:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:09.597 18:18:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:09.597 18:18:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:09.597 18:18:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:09.597 18:18:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:09.597 18:18:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:09.597 18:18:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
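Each controller is then interrogated with nvme id-ctrl: bit 3 of OACS (0x12a & 0x8 here) says namespace management is supported, and an unvmcap of 0 means no capacity is left unallocated, so every iteration ends in continue and nothing has to be reverted. A hedged sketch of that per-controller decision:

  # hedged sketch of the per-controller check traced above
  for ctrlr in /dev/nvme0 /dev/nvme1 /dev/nvme2 /dev/nvme3; do
      oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # 0x12a in this log
      (( (oacs & 0x8) == 0 )) && continue                            # no ns management
      unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
      (( unvmcap == 0 )) && continue         # nothing unallocated, nothing to revert
      # otherwise the namespaces would be rebuilt here (not reached in this run)
  done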
00:05:09.597 18:18:29 -- common/autotest_common.sh@1543 -- # continue 00:05:09.597 18:18:29 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:09.597 18:18:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.597 18:18:29 -- common/autotest_common.sh@10 -- # set +x 00:05:09.597 18:18:29 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:09.597 18:18:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:09.597 18:18:29 -- common/autotest_common.sh@10 -- # set +x 00:05:09.597 18:18:29 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:10.216 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.476 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.476 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.736 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.736 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.736 18:18:30 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:10.736 18:18:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.736 18:18:30 -- common/autotest_common.sh@10 -- # set +x 00:05:10.736 18:18:30 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:10.736 18:18:30 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:10.736 18:18:30 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:10.736 18:18:30 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:10.736 18:18:30 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:10.736 18:18:30 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:10.736 18:18:30 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:10.736 18:18:30 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:10.736 18:18:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:10.736 18:18:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:10.736 18:18:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:10.736 18:18:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:10.736 18:18:30 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:10.736 18:18:30 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:10.736 18:18:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:10.736 18:18:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:10.736 18:18:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:10.736 18:18:30 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:10.736 18:18:30 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:10.736 18:18:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:10.736 18:18:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:10.736 18:18:30 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:10.736 18:18:30 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:10.736 18:18:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:10.736 18:18:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:10.736 18:18:30 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:10.736 18:18:30 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
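opal_revert_cleanup above filters the controller list by PCI device ID: only parts reporting 0x0a54 (an Intel datacenter NVMe ID) get an OPAL revert, and these emulated QEMU disks all report 0x0010, so the list stays empty and the function returns early. A hedged sketch of that filter; the hardcoded BDFs are simply the four from this log:

  # hedged sketch of the device-id filter traced above
  get_nvme_bdfs_by_id() {    # an approximation, not the verbatim helper
      local bdfs=() bdf device
      for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
          device=$(cat "/sys/bus/pci/devices/$bdf/device")  # 0x0010 for these disks
          [[ $device == "$1" ]] && bdfs+=("$bdf")
      done
      (( ${#bdfs[@]} > 0 )) || return 0    # "(( 0 > 0 ))" then "return 0" in the trace
      printf '%s\n' "${bdfs[@]}"
  }
  get_nvme_bdfs_by_id 0x0a54   # empty here, so there is nothing to revert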
00:05:10.736 18:18:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:10.736 18:18:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:10.736 18:18:30 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:10.736 18:18:30 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:10.736 18:18:30 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:10.736 18:18:30 -- common/autotest_common.sh@1572 -- # return 0 00:05:10.736 18:18:30 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:10.736 18:18:30 -- common/autotest_common.sh@1580 -- # return 0 00:05:10.737 18:18:30 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:10.737 18:18:30 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:10.737 18:18:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:10.737 18:18:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:10.737 18:18:30 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:10.737 18:18:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.737 18:18:30 -- common/autotest_common.sh@10 -- # set +x 00:05:10.737 18:18:30 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:10.737 18:18:30 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:10.737 18:18:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.737 18:18:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.737 18:18:30 -- common/autotest_common.sh@10 -- # set +x 00:05:10.737 ************************************ 00:05:10.737 START TEST env 00:05:10.737 ************************************ 00:05:10.737 18:18:30 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:10.737 * Looking for test storage... 00:05:10.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:10.737 18:18:30 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:10.737 18:18:30 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:10.737 18:18:30 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:10.997 18:18:30 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:10.997 18:18:30 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.997 18:18:30 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.997 18:18:30 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.997 18:18:30 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.997 18:18:30 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.997 18:18:30 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.997 18:18:30 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.997 18:18:30 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.997 18:18:30 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.997 18:18:30 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.997 18:18:30 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.997 18:18:30 env -- scripts/common.sh@344 -- # case "$op" in 00:05:10.997 18:18:30 env -- scripts/common.sh@345 -- # : 1 00:05:10.997 18:18:30 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.997 18:18:30 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.997 18:18:30 env -- scripts/common.sh@365 -- # decimal 1 00:05:10.997 18:18:30 env -- scripts/common.sh@353 -- # local d=1 00:05:10.997 18:18:30 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.997 18:18:30 env -- scripts/common.sh@355 -- # echo 1 00:05:10.997 18:18:30 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.997 18:18:30 env -- scripts/common.sh@366 -- # decimal 2 00:05:10.997 18:18:30 env -- scripts/common.sh@353 -- # local d=2 00:05:10.997 18:18:30 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.997 18:18:30 env -- scripts/common.sh@355 -- # echo 2 00:05:10.997 18:18:30 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.997 18:18:30 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.997 18:18:30 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.997 18:18:30 env -- scripts/common.sh@368 -- # return 0 00:05:10.997 18:18:30 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.997 18:18:30 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:10.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.997 --rc genhtml_branch_coverage=1 00:05:10.997 --rc genhtml_function_coverage=1 00:05:10.997 --rc genhtml_legend=1 00:05:10.997 --rc geninfo_all_blocks=1 00:05:10.997 --rc geninfo_unexecuted_blocks=1 00:05:10.997 00:05:10.997 ' 00:05:10.997 18:18:30 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:10.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.997 --rc genhtml_branch_coverage=1 00:05:10.997 --rc genhtml_function_coverage=1 00:05:10.997 --rc genhtml_legend=1 00:05:10.997 --rc geninfo_all_blocks=1 00:05:10.997 --rc geninfo_unexecuted_blocks=1 00:05:10.997 00:05:10.997 ' 00:05:10.997 18:18:30 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:10.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.997 --rc genhtml_branch_coverage=1 00:05:10.997 --rc genhtml_function_coverage=1 00:05:10.997 --rc genhtml_legend=1 00:05:10.997 --rc geninfo_all_blocks=1 00:05:10.997 --rc geninfo_unexecuted_blocks=1 00:05:10.997 00:05:10.997 ' 00:05:10.997 18:18:30 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:10.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.998 --rc genhtml_branch_coverage=1 00:05:10.998 --rc genhtml_function_coverage=1 00:05:10.998 --rc genhtml_legend=1 00:05:10.998 --rc geninfo_all_blocks=1 00:05:10.998 --rc geninfo_unexecuted_blocks=1 00:05:10.998 00:05:10.998 ' 00:05:10.998 18:18:30 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:10.998 18:18:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.998 18:18:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.998 18:18:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.998 ************************************ 00:05:10.998 START TEST env_memory 00:05:10.998 ************************************ 00:05:10.998 18:18:30 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:10.998 00:05:10.998 00:05:10.998 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.998 http://cunit.sourceforge.net/ 00:05:10.998 00:05:10.998 00:05:10.998 Suite: memory 00:05:10.998 Test: alloc and free memory map ...[2024-11-29 18:18:30.771133] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:10.998 passed 00:05:10.998 Test: mem map translation ...[2024-11-29 18:18:30.810131] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:10.998 [2024-11-29 18:18:30.810292] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:10.998 [2024-11-29 18:18:30.810405] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:10.998 [2024-11-29 18:18:30.810486] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:10.998 passed 00:05:10.998 Test: mem map registration ...[2024-11-29 18:18:30.878758] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:10.998 [2024-11-29 18:18:30.878897] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:11.259 passed 00:05:11.259 Test: mem map adjacent registrations ...passed 00:05:11.259 00:05:11.259 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.259 suites 1 1 n/a 0 0 00:05:11.259 tests 4 4 4 0 0 00:05:11.259 asserts 152 152 152 0 n/a 00:05:11.259 00:05:11.259 Elapsed time = 0.233 seconds 00:05:11.259 ************************************ 00:05:11.259 END TEST env_memory 00:05:11.259 ************************************ 00:05:11.259 00:05:11.259 real 0m0.266s 00:05:11.259 user 0m0.244s 00:05:11.259 sys 0m0.016s 00:05:11.259 18:18:30 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.259 18:18:30 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:11.259 18:18:31 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:11.259 18:18:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.259 18:18:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.259 18:18:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.259 ************************************ 00:05:11.259 START TEST env_vtophys 00:05:11.259 ************************************ 00:05:11.259 18:18:31 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:11.259 EAL: lib.eal log level changed from notice to debug 00:05:11.259 EAL: Detected lcore 0 as core 0 on socket 0 00:05:11.259 EAL: Detected lcore 1 as core 0 on socket 0 00:05:11.259 EAL: Detected lcore 2 as core 0 on socket 0 00:05:11.259 EAL: Detected lcore 3 as core 0 on socket 0 00:05:11.259 EAL: Detected lcore 4 as core 0 on socket 0 00:05:11.259 EAL: Detected lcore 5 as core 0 on socket 0 00:05:11.259 EAL: Detected lcore 6 as core 0 on socket 0 00:05:11.259 EAL: Detected lcore 7 as core 0 on socket 0 00:05:11.259 EAL: Detected lcore 8 as core 0 on socket 0 00:05:11.259 EAL: Detected lcore 9 as core 0 on socket 0 00:05:11.259 EAL: Maximum logical cores by configuration: 128 00:05:11.259 EAL: Detected CPU lcores: 10 00:05:11.259 EAL: Detected NUMA nodes: 1 00:05:11.259 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:11.259 EAL: Detected shared linkage of DPDK 00:05:11.259 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:11.259 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:11.259 EAL: Registered [vdev] bus. 00:05:11.259 EAL: bus.vdev log level changed from disabled to notice 00:05:11.259 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:11.259 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:11.259 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:11.259 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:11.259 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:11.259 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:11.259 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:11.259 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:11.259 EAL: No shared files mode enabled, IPC will be disabled 00:05:11.259 EAL: No shared files mode enabled, IPC is disabled 00:05:11.259 EAL: Selected IOVA mode 'PA' 00:05:11.259 EAL: Probing VFIO support... 00:05:11.259 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:11.259 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:11.259 EAL: Ask a virtual area of 0x2e000 bytes 00:05:11.259 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:11.259 EAL: Setting up physically contiguous memory... 00:05:11.259 EAL: Setting maximum number of open files to 524288 00:05:11.259 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:11.259 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:11.259 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.259 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:11.259 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.259 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.259 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:11.259 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:11.259 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.259 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:11.259 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.259 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.259 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:11.259 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:11.259 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.259 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:11.259 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.259 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.259 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:11.259 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:11.259 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.259 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:11.259 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.259 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.259 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:11.259 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:05:11.259 EAL: Hugepages will be freed exactly as allocated. 00:05:11.259 EAL: No shared files mode enabled, IPC is disabled 00:05:11.259 EAL: No shared files mode enabled, IPC is disabled 00:05:11.530 EAL: TSC frequency is ~2600000 KHz 00:05:11.530 EAL: Main lcore 0 is ready (tid=7fb5ca618a40;cpuset=[0]) 00:05:11.530 EAL: Trying to obtain current memory policy. 00:05:11.530 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.530 EAL: Restoring previous memory policy: 0 00:05:11.530 EAL: request: mp_malloc_sync 00:05:11.530 EAL: No shared files mode enabled, IPC is disabled 00:05:11.530 EAL: Heap on socket 0 was expanded by 2MB 00:05:11.530 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:11.530 EAL: No shared files mode enabled, IPC is disabled 00:05:11.530 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:11.530 EAL: Mem event callback 'spdk:(nil)' registered 00:05:11.531 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:11.531 00:05:11.531 00:05:11.531 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.531 http://cunit.sourceforge.net/ 00:05:11.531 00:05:11.531 00:05:11.531 Suite: components_suite 00:05:11.794 Test: vtophys_malloc_test ...passed 00:05:11.794 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:11.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.794 EAL: Restoring previous memory policy: 4 00:05:11.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.794 EAL: request: mp_malloc_sync 00:05:11.794 EAL: No shared files mode enabled, IPC is disabled 00:05:11.794 EAL: Heap on socket 0 was expanded by 4MB 00:05:11.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.794 EAL: request: mp_malloc_sync 00:05:11.794 EAL: No shared files mode enabled, IPC is disabled 00:05:11.794 EAL: Heap on socket 0 was shrunk by 4MB 00:05:11.794 EAL: Trying to obtain current memory policy. 00:05:11.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.794 EAL: Restoring previous memory policy: 4 00:05:11.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.794 EAL: request: mp_malloc_sync 00:05:11.794 EAL: No shared files mode enabled, IPC is disabled 00:05:11.794 EAL: Heap on socket 0 was expanded by 6MB 00:05:11.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.794 EAL: request: mp_malloc_sync 00:05:11.794 EAL: No shared files mode enabled, IPC is disabled 00:05:11.794 EAL: Heap on socket 0 was shrunk by 6MB 00:05:11.794 EAL: Trying to obtain current memory policy. 00:05:11.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.794 EAL: Restoring previous memory policy: 4 00:05:11.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.794 EAL: request: mp_malloc_sync 00:05:11.794 EAL: No shared files mode enabled, IPC is disabled 00:05:11.794 EAL: Heap on socket 0 was expanded by 10MB 00:05:11.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.794 EAL: request: mp_malloc_sync 00:05:11.794 EAL: No shared files mode enabled, IPC is disabled 00:05:11.794 EAL: Heap on socket 0 was shrunk by 10MB 00:05:11.794 EAL: Trying to obtain current memory policy. 
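Each vtophys round below has the same shape: set MPOL_PREFERRED, allocate (the "expanded by N MB" callback), then free (the matching "shrunk by N MB"). The sizes in this run grow as 2 + 2^k MB (4, 6, 10, 18, 34, 66, 130, 258, 514, 1026), covering both sub-hugepage and multi-hugepage allocations. A one-liner that reproduces the sequence, assuming only shell arithmetic:

  # the expansion sizes observed in this log: a 2 MB base plus a doubling component
  for k in $(seq 1 10); do echo "$((2 + (1 << k)))MB"; done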
00:05:11.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.794 EAL: Restoring previous memory policy: 4 00:05:11.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.794 EAL: request: mp_malloc_sync 00:05:11.794 EAL: No shared files mode enabled, IPC is disabled 00:05:11.794 EAL: Heap on socket 0 was expanded by 18MB 00:05:11.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.794 EAL: request: mp_malloc_sync 00:05:11.794 EAL: No shared files mode enabled, IPC is disabled 00:05:11.794 EAL: Heap on socket 0 was shrunk by 18MB 00:05:11.794 EAL: Trying to obtain current memory policy. 00:05:11.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.794 EAL: Restoring previous memory policy: 4 00:05:11.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.794 EAL: request: mp_malloc_sync 00:05:11.794 EAL: No shared files mode enabled, IPC is disabled 00:05:11.794 EAL: Heap on socket 0 was expanded by 34MB 00:05:11.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.794 EAL: request: mp_malloc_sync 00:05:11.794 EAL: No shared files mode enabled, IPC is disabled 00:05:11.794 EAL: Heap on socket 0 was shrunk by 34MB 00:05:11.794 EAL: Trying to obtain current memory policy. 00:05:11.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.794 EAL: Restoring previous memory policy: 4 00:05:11.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.794 EAL: request: mp_malloc_sync 00:05:11.794 EAL: No shared files mode enabled, IPC is disabled 00:05:11.794 EAL: Heap on socket 0 was expanded by 66MB 00:05:11.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.794 EAL: request: mp_malloc_sync 00:05:11.794 EAL: No shared files mode enabled, IPC is disabled 00:05:11.794 EAL: Heap on socket 0 was shrunk by 66MB 00:05:11.794 EAL: Trying to obtain current memory policy. 00:05:11.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.794 EAL: Restoring previous memory policy: 4 00:05:11.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.794 EAL: request: mp_malloc_sync 00:05:11.794 EAL: No shared files mode enabled, IPC is disabled 00:05:11.794 EAL: Heap on socket 0 was expanded by 130MB 00:05:12.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.054 EAL: request: mp_malloc_sync 00:05:12.054 EAL: No shared files mode enabled, IPC is disabled 00:05:12.054 EAL: Heap on socket 0 was shrunk by 130MB 00:05:12.054 EAL: Trying to obtain current memory policy. 00:05:12.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.054 EAL: Restoring previous memory policy: 4 00:05:12.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.054 EAL: request: mp_malloc_sync 00:05:12.054 EAL: No shared files mode enabled, IPC is disabled 00:05:12.054 EAL: Heap on socket 0 was expanded by 258MB 00:05:12.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.054 EAL: request: mp_malloc_sync 00:05:12.054 EAL: No shared files mode enabled, IPC is disabled 00:05:12.054 EAL: Heap on socket 0 was shrunk by 258MB 00:05:12.054 EAL: Trying to obtain current memory policy. 
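Since "Hugepages will be freed exactly as allocated" (noted above), free-hugepage accounting should return to its baseline between rounds. A spot check from a second shell while the test runs, assuming a standard /proc layout:

  # compare Free against Total between expand/shrink rounds
  grep -E 'HugePages_(Total|Free)' /proc/meminfo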
00:05:12.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.054 EAL: Restoring previous memory policy: 4 00:05:12.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.054 EAL: request: mp_malloc_sync 00:05:12.054 EAL: No shared files mode enabled, IPC is disabled 00:05:12.054 EAL: Heap on socket 0 was expanded by 514MB 00:05:12.055 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.315 EAL: request: mp_malloc_sync 00:05:12.315 EAL: No shared files mode enabled, IPC is disabled 00:05:12.315 EAL: Heap on socket 0 was shrunk by 514MB 00:05:12.315 EAL: Trying to obtain current memory policy. 00:05:12.315 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.315 EAL: Restoring previous memory policy: 4 00:05:12.315 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.315 EAL: request: mp_malloc_sync 00:05:12.315 EAL: No shared files mode enabled, IPC is disabled 00:05:12.315 EAL: Heap on socket 0 was expanded by 1026MB 00:05:12.577 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.577 passed 00:05:12.577 00:05:12.577 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.577 suites 1 1 n/a 0 0 00:05:12.577 tests 2 2 2 0 0 00:05:12.577 asserts 5722 5722 5722 0 n/a 00:05:12.577 00:05:12.577 Elapsed time = 1.080 seconds 00:05:12.577 EAL: request: mp_malloc_sync 00:05:12.577 EAL: No shared files mode enabled, IPC is disabled 00:05:12.577 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:12.577 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.577 EAL: request: mp_malloc_sync 00:05:12.577 EAL: No shared files mode enabled, IPC is disabled 00:05:12.577 EAL: Heap on socket 0 was shrunk by 2MB 00:05:12.577 EAL: No shared files mode enabled, IPC is disabled 00:05:12.577 EAL: No shared files mode enabled, IPC is disabled 00:05:12.577 EAL: No shared files mode enabled, IPC is disabled 00:05:12.577 ************************************ 00:05:12.577 END TEST env_vtophys 00:05:12.577 ************************************ 00:05:12.577 00:05:12.577 real 0m1.320s 00:05:12.577 user 0m0.515s 00:05:12.577 sys 0m0.665s 00:05:12.577 18:18:32 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.577 18:18:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:12.577 18:18:32 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:12.577 18:18:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.577 18:18:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.577 18:18:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.577 ************************************ 00:05:12.577 START TEST env_pci 00:05:12.577 ************************************ 00:05:12.577 18:18:32 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:12.577 00:05:12.577 00:05:12.577 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.577 http://cunit.sourceforge.net/ 00:05:12.577 00:05:12.577 00:05:12.577 Suite: pci 00:05:12.577 Test: pci_hook ...[2024-11-29 18:18:32.431336] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69355 has claimed it 00:05:12.577 passed 00:05:12.577 00:05:12.577 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.577 suites 1 1 n/a 0 0 00:05:12.577 tests 1 1 1 0 0 00:05:12.577 asserts 25 25 25 0 n/a 00:05:12.577 00:05:12.577 Elapsed time = 0.003 seconds 00:05:12.577 EAL: Cannot find 
device (10000:00:01.0) 00:05:12.577 EAL: Failed to attach device on primary process 00:05:12.577 00:05:12.577 real 0m0.049s 00:05:12.577 user 0m0.015s 00:05:12.577 sys 0m0.032s 00:05:12.577 ************************************ 00:05:12.577 END TEST env_pci 00:05:12.577 ************************************ 00:05:12.577 18:18:32 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.577 18:18:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:12.839 18:18:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:12.839 18:18:32 env -- env/env.sh@15 -- # uname 00:05:12.839 18:18:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:12.839 18:18:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:12.839 18:18:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.839 18:18:32 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:12.839 18:18:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.839 18:18:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.839 ************************************ 00:05:12.839 START TEST env_dpdk_post_init 00:05:12.839 ************************************ 00:05:12.839 18:18:32 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.839 EAL: Detected CPU lcores: 10 00:05:12.839 EAL: Detected NUMA nodes: 1 00:05:12.839 EAL: Detected shared linkage of DPDK 00:05:12.839 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.839 EAL: Selected IOVA mode 'PA' 00:05:12.839 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:12.839 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:12.839 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:12.839 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:12.839 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:12.839 Starting DPDK initialization... 00:05:12.839 Starting SPDK post initialization... 00:05:12.839 SPDK NVMe probe 00:05:12.839 Attaching to 0000:00:10.0 00:05:12.839 Attaching to 0000:00:11.0 00:05:12.839 Attaching to 0000:00:12.0 00:05:12.839 Attaching to 0000:00:13.0 00:05:12.839 Attached to 0000:00:10.0 00:05:12.839 Attached to 0000:00:11.0 00:05:12.839 Attached to 0000:00:13.0 00:05:12.839 Attached to 0000:00:12.0 00:05:12.839 Cleaning up... 
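env_dpdk_post_init performs a full EAL attach and spdk_nvme probe against the four controllers that setup.sh bound to uio_pci_generic earlier. The invocation can be repeated standalone; the flags here are copied from the run_test line above, and --base-virtaddr fixes where DPDK maps its memory so addresses stay consistent across runs:

  # re-run the post-init test by hand, with the core mask and base address from this log
  /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
      -c 0x1 --base-virtaddr=0x200000000000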
00:05:13.100 ************************************ 00:05:13.100 END TEST env_dpdk_post_init 00:05:13.100 ************************************ 00:05:13.100 00:05:13.100 real 0m0.224s 00:05:13.100 user 0m0.069s 00:05:13.100 sys 0m0.057s 00:05:13.100 18:18:32 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.100 18:18:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:13.100 18:18:32 env -- env/env.sh@26 -- # uname 00:05:13.100 18:18:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:13.100 18:18:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:13.100 18:18:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.100 18:18:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.100 18:18:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.100 ************************************ 00:05:13.100 START TEST env_mem_callbacks 00:05:13.100 ************************************ 00:05:13.100 18:18:32 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:13.100 EAL: Detected CPU lcores: 10 00:05:13.100 EAL: Detected NUMA nodes: 1 00:05:13.100 EAL: Detected shared linkage of DPDK 00:05:13.100 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:13.100 EAL: Selected IOVA mode 'PA' 00:05:13.100 00:05:13.100 00:05:13.100 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.100 http://cunit.sourceforge.net/ 00:05:13.100 00:05:13.100 00:05:13.100 Suite: memory 00:05:13.100 Test: test ... 00:05:13.100 register 0x200000200000 2097152 00:05:13.100 malloc 3145728 00:05:13.100 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:13.100 register 0x200000400000 4194304 00:05:13.100 buf 0x200000500000 len 3145728 PASSED 00:05:13.100 malloc 64 00:05:13.100 buf 0x2000004fff40 len 64 PASSED 00:05:13.100 malloc 4194304 00:05:13.100 register 0x200000800000 6291456 00:05:13.100 buf 0x200000a00000 len 4194304 PASSED 00:05:13.100 free 0x200000500000 3145728 00:05:13.100 free 0x2000004fff40 64 00:05:13.100 unregister 0x200000400000 4194304 PASSED 00:05:13.100 free 0x200000a00000 4194304 00:05:13.100 unregister 0x200000800000 6291456 PASSED 00:05:13.100 malloc 8388608 00:05:13.100 register 0x200000400000 10485760 00:05:13.100 buf 0x200000600000 len 8388608 PASSED 00:05:13.100 free 0x200000600000 8388608 00:05:13.100 unregister 0x200000400000 10485760 PASSED 00:05:13.100 passed 00:05:13.100 00:05:13.100 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.100 suites 1 1 n/a 0 0 00:05:13.100 tests 1 1 1 0 0 00:05:13.100 asserts 15 15 15 0 n/a 00:05:13.100 00:05:13.100 Elapsed time = 0.006 seconds 00:05:13.100 00:05:13.100 real 0m0.165s 00:05:13.100 user 0m0.024s 00:05:13.100 sys 0m0.039s 00:05:13.100 18:18:32 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.100 18:18:32 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:13.100 ************************************ 00:05:13.100 END TEST env_mem_callbacks 00:05:13.100 ************************************ 00:05:13.100 00:05:13.100 real 0m2.411s 00:05:13.100 user 0m1.021s 00:05:13.100 sys 0m1.002s 00:05:13.100 18:18:32 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.100 18:18:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.100 ************************************ 00:05:13.100 END TEST env 00:05:13.100 
************************************ 00:05:13.362 18:18:33 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:13.362 18:18:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.362 18:18:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.362 18:18:33 -- common/autotest_common.sh@10 -- # set +x 00:05:13.362 ************************************ 00:05:13.362 START TEST rpc 00:05:13.362 ************************************ 00:05:13.362 18:18:33 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:13.362 * Looking for test storage... 00:05:13.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:13.362 18:18:33 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.362 18:18:33 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.362 18:18:33 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.362 18:18:33 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.362 18:18:33 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.362 18:18:33 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.362 18:18:33 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.362 18:18:33 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.362 18:18:33 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.362 18:18:33 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.362 18:18:33 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.362 18:18:33 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.362 18:18:33 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.362 18:18:33 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.362 18:18:33 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.362 18:18:33 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:13.362 18:18:33 rpc -- scripts/common.sh@345 -- # : 1 00:05:13.362 18:18:33 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.362 18:18:33 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.362 18:18:33 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:13.362 18:18:33 rpc -- scripts/common.sh@353 -- # local d=1 00:05:13.362 18:18:33 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.362 18:18:33 rpc -- scripts/common.sh@355 -- # echo 1 00:05:13.362 18:18:33 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.362 18:18:33 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:13.362 18:18:33 rpc -- scripts/common.sh@353 -- # local d=2 00:05:13.362 18:18:33 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.362 18:18:33 rpc -- scripts/common.sh@355 -- # echo 2 00:05:13.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
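The rpc suite starts spdk_tgt with only the bdev tracepoint group enabled (-e bdev, traced above) and blocks until the RPC socket answers. A minimal sketch of that startup handshake, using the waitforlisten and killprocess helpers from autotest_common.sh as this log does:

  # start the target in the background, then poll /var/tmp/spdk.sock until it accepts RPCs
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  trap 'killprocess "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT
  waitforlisten "$spdk_pid"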
00:05:13.362 18:18:33 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.362 18:18:33 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.362 18:18:33 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.362 18:18:33 rpc -- scripts/common.sh@368 -- # return 0 00:05:13.362 18:18:33 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.362 18:18:33 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.362 --rc genhtml_branch_coverage=1 00:05:13.362 --rc genhtml_function_coverage=1 00:05:13.362 --rc genhtml_legend=1 00:05:13.362 --rc geninfo_all_blocks=1 00:05:13.362 --rc geninfo_unexecuted_blocks=1 00:05:13.362 00:05:13.362 ' 00:05:13.362 18:18:33 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.362 --rc genhtml_branch_coverage=1 00:05:13.362 --rc genhtml_function_coverage=1 00:05:13.362 --rc genhtml_legend=1 00:05:13.362 --rc geninfo_all_blocks=1 00:05:13.362 --rc geninfo_unexecuted_blocks=1 00:05:13.362 00:05:13.362 ' 00:05:13.362 18:18:33 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.362 --rc genhtml_branch_coverage=1 00:05:13.362 --rc genhtml_function_coverage=1 00:05:13.362 --rc genhtml_legend=1 00:05:13.362 --rc geninfo_all_blocks=1 00:05:13.363 --rc geninfo_unexecuted_blocks=1 00:05:13.363 00:05:13.363 ' 00:05:13.363 18:18:33 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.363 --rc genhtml_branch_coverage=1 00:05:13.363 --rc genhtml_function_coverage=1 00:05:13.363 --rc genhtml_legend=1 00:05:13.363 --rc geninfo_all_blocks=1 00:05:13.363 --rc geninfo_unexecuted_blocks=1 00:05:13.363 00:05:13.363 ' 00:05:13.363 18:18:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69482 00:05:13.363 18:18:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.363 18:18:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69482 00:05:13.363 18:18:33 rpc -- common/autotest_common.sh@835 -- # '[' -z 69482 ']' 00:05:13.363 18:18:33 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:13.363 18:18:33 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.363 18:18:33 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.363 18:18:33 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.363 18:18:33 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.363 18:18:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.363 [2024-11-29 18:18:33.234321] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:05:13.363 [2024-11-29 18:18:33.234575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69482 ] 00:05:13.624 [2024-11-29 18:18:33.388225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.624 [2024-11-29 18:18:33.407496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
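The startup notices here spell out both ways to read those bdev tracepoints: attach spdk_trace to the live process (the exact command appears in the next notice), or keep the shared-memory file for offline decoding after the target exits. The pid is specific to this run; the copy destination is an arbitrary choice:

  # live capture while the target runs (command text from the notice below)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 69482
  # or preserve the trace file for offline analysis once the target has exited
  cp /dev/shm/spdk_tgt_trace.pid69482 /tmp/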
00:05:13.624 [2024-11-29 18:18:33.407683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69482' to capture a snapshot of events at runtime. 00:05:13.624 [2024-11-29 18:18:33.407752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:13.624 [2024-11-29 18:18:33.407783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:13.624 [2024-11-29 18:18:33.407804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69482 for offline analysis/debug. 00:05:13.624 [2024-11-29 18:18:33.408146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.195 18:18:34 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.195 18:18:34 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:14.195 18:18:34 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:14.195 18:18:34 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:14.195 18:18:34 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:14.195 18:18:34 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:14.195 18:18:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.195 18:18:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.195 18:18:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.195 ************************************ 00:05:14.195 START TEST rpc_integrity 00:05:14.195 ************************************ 00:05:14.195 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:14.195 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:14.195 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.195 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.456 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.456 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:14.456 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:14.456 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:14.456 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:14.456 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.456 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.456 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.456 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:14.456 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:14.456 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.456 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.456 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.456 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:14.456 { 00:05:14.456 "name": "Malloc0", 00:05:14.456 "aliases": [ 00:05:14.456 "ff12e4a7-cf1f-471f-995c-6dd3878bf049" 00:05:14.456 ], 
00:05:14.456 "product_name": "Malloc disk", 00:05:14.456 "block_size": 512, 00:05:14.456 "num_blocks": 16384, 00:05:14.456 "uuid": "ff12e4a7-cf1f-471f-995c-6dd3878bf049", 00:05:14.456 "assigned_rate_limits": { 00:05:14.456 "rw_ios_per_sec": 0, 00:05:14.456 "rw_mbytes_per_sec": 0, 00:05:14.456 "r_mbytes_per_sec": 0, 00:05:14.456 "w_mbytes_per_sec": 0 00:05:14.456 }, 00:05:14.456 "claimed": false, 00:05:14.456 "zoned": false, 00:05:14.456 "supported_io_types": { 00:05:14.456 "read": true, 00:05:14.456 "write": true, 00:05:14.456 "unmap": true, 00:05:14.456 "flush": true, 00:05:14.456 "reset": true, 00:05:14.456 "nvme_admin": false, 00:05:14.456 "nvme_io": false, 00:05:14.456 "nvme_io_md": false, 00:05:14.456 "write_zeroes": true, 00:05:14.456 "zcopy": true, 00:05:14.456 "get_zone_info": false, 00:05:14.456 "zone_management": false, 00:05:14.456 "zone_append": false, 00:05:14.456 "compare": false, 00:05:14.456 "compare_and_write": false, 00:05:14.456 "abort": true, 00:05:14.456 "seek_hole": false, 00:05:14.456 "seek_data": false, 00:05:14.456 "copy": true, 00:05:14.456 "nvme_iov_md": false 00:05:14.456 }, 00:05:14.456 "memory_domains": [ 00:05:14.456 { 00:05:14.456 "dma_device_id": "system", 00:05:14.456 "dma_device_type": 1 00:05:14.456 }, 00:05:14.456 { 00:05:14.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.456 "dma_device_type": 2 00:05:14.456 } 00:05:14.456 ], 00:05:14.456 "driver_specific": {} 00:05:14.456 } 00:05:14.456 ]' 00:05:14.456 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:14.456 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:14.456 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:14.456 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.456 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.456 [2024-11-29 18:18:34.202521] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:14.456 [2024-11-29 18:18:34.202664] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:14.456 [2024-11-29 18:18:34.202700] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:14.456 [2024-11-29 18:18:34.202710] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:14.456 [2024-11-29 18:18:34.204891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:14.456 [2024-11-29 18:18:34.204926] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:14.456 Passthru0 00:05:14.456 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.456 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:14.456 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.456 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.456 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.456 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:14.456 { 00:05:14.456 "name": "Malloc0", 00:05:14.456 "aliases": [ 00:05:14.456 "ff12e4a7-cf1f-471f-995c-6dd3878bf049" 00:05:14.456 ], 00:05:14.456 "product_name": "Malloc disk", 00:05:14.456 "block_size": 512, 00:05:14.456 "num_blocks": 16384, 00:05:14.456 "uuid": "ff12e4a7-cf1f-471f-995c-6dd3878bf049", 00:05:14.456 "assigned_rate_limits": { 00:05:14.456 "rw_ios_per_sec": 0, 
00:05:14.456 "rw_mbytes_per_sec": 0, 00:05:14.456 "r_mbytes_per_sec": 0, 00:05:14.456 "w_mbytes_per_sec": 0 00:05:14.456 }, 00:05:14.456 "claimed": true, 00:05:14.456 "claim_type": "exclusive_write", 00:05:14.456 "zoned": false, 00:05:14.456 "supported_io_types": { 00:05:14.456 "read": true, 00:05:14.456 "write": true, 00:05:14.456 "unmap": true, 00:05:14.456 "flush": true, 00:05:14.456 "reset": true, 00:05:14.456 "nvme_admin": false, 00:05:14.456 "nvme_io": false, 00:05:14.456 "nvme_io_md": false, 00:05:14.456 "write_zeroes": true, 00:05:14.456 "zcopy": true, 00:05:14.456 "get_zone_info": false, 00:05:14.456 "zone_management": false, 00:05:14.456 "zone_append": false, 00:05:14.456 "compare": false, 00:05:14.456 "compare_and_write": false, 00:05:14.456 "abort": true, 00:05:14.456 "seek_hole": false, 00:05:14.456 "seek_data": false, 00:05:14.456 "copy": true, 00:05:14.456 "nvme_iov_md": false 00:05:14.456 }, 00:05:14.456 "memory_domains": [ 00:05:14.456 { 00:05:14.456 "dma_device_id": "system", 00:05:14.456 "dma_device_type": 1 00:05:14.456 }, 00:05:14.456 { 00:05:14.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.456 "dma_device_type": 2 00:05:14.456 } 00:05:14.456 ], 00:05:14.456 "driver_specific": {} 00:05:14.456 }, 00:05:14.456 { 00:05:14.456 "name": "Passthru0", 00:05:14.456 "aliases": [ 00:05:14.456 "26dbfd29-dbf3-528d-8698-90cabc0e98df" 00:05:14.456 ], 00:05:14.456 "product_name": "passthru", 00:05:14.456 "block_size": 512, 00:05:14.456 "num_blocks": 16384, 00:05:14.456 "uuid": "26dbfd29-dbf3-528d-8698-90cabc0e98df", 00:05:14.456 "assigned_rate_limits": { 00:05:14.456 "rw_ios_per_sec": 0, 00:05:14.456 "rw_mbytes_per_sec": 0, 00:05:14.456 "r_mbytes_per_sec": 0, 00:05:14.456 "w_mbytes_per_sec": 0 00:05:14.456 }, 00:05:14.456 "claimed": false, 00:05:14.456 "zoned": false, 00:05:14.456 "supported_io_types": { 00:05:14.456 "read": true, 00:05:14.456 "write": true, 00:05:14.456 "unmap": true, 00:05:14.456 "flush": true, 00:05:14.456 "reset": true, 00:05:14.456 "nvme_admin": false, 00:05:14.456 "nvme_io": false, 00:05:14.457 "nvme_io_md": false, 00:05:14.457 "write_zeroes": true, 00:05:14.457 "zcopy": true, 00:05:14.457 "get_zone_info": false, 00:05:14.457 "zone_management": false, 00:05:14.457 "zone_append": false, 00:05:14.457 "compare": false, 00:05:14.457 "compare_and_write": false, 00:05:14.457 "abort": true, 00:05:14.457 "seek_hole": false, 00:05:14.457 "seek_data": false, 00:05:14.457 "copy": true, 00:05:14.457 "nvme_iov_md": false 00:05:14.457 }, 00:05:14.457 "memory_domains": [ 00:05:14.457 { 00:05:14.457 "dma_device_id": "system", 00:05:14.457 "dma_device_type": 1 00:05:14.457 }, 00:05:14.457 { 00:05:14.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.457 "dma_device_type": 2 00:05:14.457 } 00:05:14.457 ], 00:05:14.457 "driver_specific": { 00:05:14.457 "passthru": { 00:05:14.457 "name": "Passthru0", 00:05:14.457 "base_bdev_name": "Malloc0" 00:05:14.457 } 00:05:14.457 } 00:05:14.457 } 00:05:14.457 ]' 00:05:14.457 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:14.457 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:14.457 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:14.457 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.457 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.457 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.457 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:05:14.457 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.457 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.457 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.457 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:14.457 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.457 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.457 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.457 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:14.457 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:14.457 ************************************ 00:05:14.457 END TEST rpc_integrity 00:05:14.457 ************************************ 00:05:14.457 18:18:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:14.457 00:05:14.457 real 0m0.245s 00:05:14.457 user 0m0.139s 00:05:14.457 sys 0m0.034s 00:05:14.457 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.457 18:18:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.718 18:18:34 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:14.718 18:18:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.718 18:18:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.718 18:18:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.718 ************************************ 00:05:14.718 START TEST rpc_plugins 00:05:14.718 ************************************ 00:05:14.718 18:18:34 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:14.718 18:18:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:14.718 18:18:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.718 18:18:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.718 18:18:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.718 18:18:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:14.718 18:18:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:14.718 18:18:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.718 18:18:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.718 18:18:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.718 18:18:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:14.718 { 00:05:14.718 "name": "Malloc1", 00:05:14.718 "aliases": [ 00:05:14.718 "354faf00-a71d-462c-92a4-2723b131ae2f" 00:05:14.718 ], 00:05:14.718 "product_name": "Malloc disk", 00:05:14.718 "block_size": 4096, 00:05:14.718 "num_blocks": 256, 00:05:14.718 "uuid": "354faf00-a71d-462c-92a4-2723b131ae2f", 00:05:14.718 "assigned_rate_limits": { 00:05:14.718 "rw_ios_per_sec": 0, 00:05:14.718 "rw_mbytes_per_sec": 0, 00:05:14.718 "r_mbytes_per_sec": 0, 00:05:14.718 "w_mbytes_per_sec": 0 00:05:14.718 }, 00:05:14.718 "claimed": false, 00:05:14.718 "zoned": false, 00:05:14.718 "supported_io_types": { 00:05:14.718 "read": true, 00:05:14.718 "write": true, 00:05:14.718 "unmap": true, 00:05:14.718 "flush": true, 00:05:14.718 "reset": true, 00:05:14.718 "nvme_admin": false, 00:05:14.718 "nvme_io": false, 00:05:14.718 "nvme_io_md": false, 00:05:14.718 "write_zeroes": true, 
00:05:14.718 "zcopy": true, 00:05:14.718 "get_zone_info": false, 00:05:14.718 "zone_management": false, 00:05:14.718 "zone_append": false, 00:05:14.718 "compare": false, 00:05:14.718 "compare_and_write": false, 00:05:14.718 "abort": true, 00:05:14.718 "seek_hole": false, 00:05:14.718 "seek_data": false, 00:05:14.718 "copy": true, 00:05:14.718 "nvme_iov_md": false 00:05:14.718 }, 00:05:14.718 "memory_domains": [ 00:05:14.718 { 00:05:14.718 "dma_device_id": "system", 00:05:14.718 "dma_device_type": 1 00:05:14.718 }, 00:05:14.719 { 00:05:14.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.719 "dma_device_type": 2 00:05:14.719 } 00:05:14.719 ], 00:05:14.719 "driver_specific": {} 00:05:14.719 } 00:05:14.719 ]' 00:05:14.719 18:18:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:14.719 18:18:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:14.719 18:18:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:14.719 18:18:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.719 18:18:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.719 18:18:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.719 18:18:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:14.719 18:18:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.719 18:18:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.719 18:18:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.719 18:18:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:14.719 18:18:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:14.719 ************************************ 00:05:14.719 END TEST rpc_plugins 00:05:14.719 ************************************ 00:05:14.719 18:18:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:14.719 00:05:14.719 real 0m0.117s 00:05:14.719 user 0m0.066s 00:05:14.719 sys 0m0.014s 00:05:14.719 18:18:34 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.719 18:18:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.719 18:18:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:14.719 18:18:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.719 18:18:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.719 18:18:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.719 ************************************ 00:05:14.719 START TEST rpc_trace_cmd_test 00:05:14.719 ************************************ 00:05:14.719 18:18:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:14.719 18:18:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:14.719 18:18:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:14.719 18:18:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.719 18:18:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:14.719 18:18:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.719 18:18:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:14.719 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69482", 00:05:14.719 "tpoint_group_mask": "0x8", 00:05:14.719 "iscsi_conn": { 00:05:14.719 "mask": "0x2", 00:05:14.719 "tpoint_mask": "0x0" 00:05:14.719 }, 00:05:14.719 "scsi": { 00:05:14.719 
"mask": "0x4", 00:05:14.719 "tpoint_mask": "0x0" 00:05:14.719 }, 00:05:14.719 "bdev": { 00:05:14.719 "mask": "0x8", 00:05:14.719 "tpoint_mask": "0xffffffffffffffff" 00:05:14.719 }, 00:05:14.719 "nvmf_rdma": { 00:05:14.719 "mask": "0x10", 00:05:14.719 "tpoint_mask": "0x0" 00:05:14.719 }, 00:05:14.719 "nvmf_tcp": { 00:05:14.719 "mask": "0x20", 00:05:14.719 "tpoint_mask": "0x0" 00:05:14.719 }, 00:05:14.719 "ftl": { 00:05:14.719 "mask": "0x40", 00:05:14.719 "tpoint_mask": "0x0" 00:05:14.719 }, 00:05:14.719 "blobfs": { 00:05:14.719 "mask": "0x80", 00:05:14.719 "tpoint_mask": "0x0" 00:05:14.719 }, 00:05:14.719 "dsa": { 00:05:14.719 "mask": "0x200", 00:05:14.719 "tpoint_mask": "0x0" 00:05:14.719 }, 00:05:14.719 "thread": { 00:05:14.719 "mask": "0x400", 00:05:14.719 "tpoint_mask": "0x0" 00:05:14.719 }, 00:05:14.719 "nvme_pcie": { 00:05:14.719 "mask": "0x800", 00:05:14.719 "tpoint_mask": "0x0" 00:05:14.719 }, 00:05:14.719 "iaa": { 00:05:14.719 "mask": "0x1000", 00:05:14.719 "tpoint_mask": "0x0" 00:05:14.719 }, 00:05:14.719 "nvme_tcp": { 00:05:14.719 "mask": "0x2000", 00:05:14.719 "tpoint_mask": "0x0" 00:05:14.719 }, 00:05:14.719 "bdev_nvme": { 00:05:14.719 "mask": "0x4000", 00:05:14.719 "tpoint_mask": "0x0" 00:05:14.719 }, 00:05:14.719 "sock": { 00:05:14.719 "mask": "0x8000", 00:05:14.719 "tpoint_mask": "0x0" 00:05:14.719 }, 00:05:14.719 "blob": { 00:05:14.719 "mask": "0x10000", 00:05:14.719 "tpoint_mask": "0x0" 00:05:14.719 }, 00:05:14.719 "bdev_raid": { 00:05:14.719 "mask": "0x20000", 00:05:14.719 "tpoint_mask": "0x0" 00:05:14.719 }, 00:05:14.719 "scheduler": { 00:05:14.719 "mask": "0x40000", 00:05:14.719 "tpoint_mask": "0x0" 00:05:14.719 } 00:05:14.719 }' 00:05:14.719 18:18:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:14.719 18:18:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:14.719 18:18:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:14.981 18:18:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:14.981 18:18:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:14.981 18:18:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:14.981 18:18:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:14.981 18:18:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:14.981 18:18:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:14.981 18:18:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:14.981 00:05:14.981 real 0m0.176s 00:05:14.981 user 0m0.145s 00:05:14.981 sys 0m0.022s 00:05:14.981 ************************************ 00:05:14.981 END TEST rpc_trace_cmd_test 00:05:14.981 ************************************ 00:05:14.981 18:18:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.981 18:18:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:14.981 18:18:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:14.981 18:18:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:14.981 18:18:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:14.981 18:18:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.981 18:18:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.981 18:18:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.981 ************************************ 00:05:14.981 START TEST rpc_daemon_integrity 00:05:14.981 
************************************ 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:14.981 { 00:05:14.981 "name": "Malloc2", 00:05:14.981 "aliases": [ 00:05:14.981 "81b96118-9b2c-4cc3-8a8d-f623bb0a0c20" 00:05:14.981 ], 00:05:14.981 "product_name": "Malloc disk", 00:05:14.981 "block_size": 512, 00:05:14.981 "num_blocks": 16384, 00:05:14.981 "uuid": "81b96118-9b2c-4cc3-8a8d-f623bb0a0c20", 00:05:14.981 "assigned_rate_limits": { 00:05:14.981 "rw_ios_per_sec": 0, 00:05:14.981 "rw_mbytes_per_sec": 0, 00:05:14.981 "r_mbytes_per_sec": 0, 00:05:14.981 "w_mbytes_per_sec": 0 00:05:14.981 }, 00:05:14.981 "claimed": false, 00:05:14.981 "zoned": false, 00:05:14.981 "supported_io_types": { 00:05:14.981 "read": true, 00:05:14.981 "write": true, 00:05:14.981 "unmap": true, 00:05:14.981 "flush": true, 00:05:14.981 "reset": true, 00:05:14.981 "nvme_admin": false, 00:05:14.981 "nvme_io": false, 00:05:14.981 "nvme_io_md": false, 00:05:14.981 "write_zeroes": true, 00:05:14.981 "zcopy": true, 00:05:14.981 "get_zone_info": false, 00:05:14.981 "zone_management": false, 00:05:14.981 "zone_append": false, 00:05:14.981 "compare": false, 00:05:14.981 "compare_and_write": false, 00:05:14.981 "abort": true, 00:05:14.981 "seek_hole": false, 00:05:14.981 "seek_data": false, 00:05:14.981 "copy": true, 00:05:14.981 "nvme_iov_md": false 00:05:14.981 }, 00:05:14.981 "memory_domains": [ 00:05:14.981 { 00:05:14.981 "dma_device_id": "system", 00:05:14.981 "dma_device_type": 1 00:05:14.981 }, 00:05:14.981 { 00:05:14.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.981 "dma_device_type": 2 00:05:14.981 } 00:05:14.981 ], 00:05:14.981 "driver_specific": {} 00:05:14.981 } 00:05:14.981 ]' 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd 
bdev_passthru_create -b Malloc2 -p Passthru0 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.981 [2024-11-29 18:18:34.870827] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:14.981 [2024-11-29 18:18:34.870872] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:14.981 [2024-11-29 18:18:34.870897] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:14.981 [2024-11-29 18:18:34.870906] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:14.981 [2024-11-29 18:18:34.873008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:14.981 [2024-11-29 18:18:34.873039] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:14.981 Passthru0 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.981 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:15.243 { 00:05:15.243 "name": "Malloc2", 00:05:15.243 "aliases": [ 00:05:15.243 "81b96118-9b2c-4cc3-8a8d-f623bb0a0c20" 00:05:15.243 ], 00:05:15.243 "product_name": "Malloc disk", 00:05:15.243 "block_size": 512, 00:05:15.243 "num_blocks": 16384, 00:05:15.243 "uuid": "81b96118-9b2c-4cc3-8a8d-f623bb0a0c20", 00:05:15.243 "assigned_rate_limits": { 00:05:15.243 "rw_ios_per_sec": 0, 00:05:15.243 "rw_mbytes_per_sec": 0, 00:05:15.243 "r_mbytes_per_sec": 0, 00:05:15.243 "w_mbytes_per_sec": 0 00:05:15.243 }, 00:05:15.243 "claimed": true, 00:05:15.243 "claim_type": "exclusive_write", 00:05:15.243 "zoned": false, 00:05:15.243 "supported_io_types": { 00:05:15.243 "read": true, 00:05:15.243 "write": true, 00:05:15.243 "unmap": true, 00:05:15.243 "flush": true, 00:05:15.243 "reset": true, 00:05:15.243 "nvme_admin": false, 00:05:15.243 "nvme_io": false, 00:05:15.243 "nvme_io_md": false, 00:05:15.243 "write_zeroes": true, 00:05:15.243 "zcopy": true, 00:05:15.243 "get_zone_info": false, 00:05:15.243 "zone_management": false, 00:05:15.243 "zone_append": false, 00:05:15.243 "compare": false, 00:05:15.243 "compare_and_write": false, 00:05:15.243 "abort": true, 00:05:15.243 "seek_hole": false, 00:05:15.243 "seek_data": false, 00:05:15.243 "copy": true, 00:05:15.243 "nvme_iov_md": false 00:05:15.243 }, 00:05:15.243 "memory_domains": [ 00:05:15.243 { 00:05:15.243 "dma_device_id": "system", 00:05:15.243 "dma_device_type": 1 00:05:15.243 }, 00:05:15.243 { 00:05:15.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.243 "dma_device_type": 2 00:05:15.243 } 00:05:15.243 ], 00:05:15.243 "driver_specific": {} 00:05:15.243 }, 00:05:15.243 { 00:05:15.243 "name": "Passthru0", 00:05:15.243 "aliases": [ 00:05:15.243 "a1041d02-caf0-59a2-901a-1344a52afe33" 00:05:15.243 ], 00:05:15.243 "product_name": "passthru", 00:05:15.243 "block_size": 512, 00:05:15.243 "num_blocks": 16384, 00:05:15.243 "uuid": "a1041d02-caf0-59a2-901a-1344a52afe33", 00:05:15.243 "assigned_rate_limits": { 00:05:15.243 
"rw_ios_per_sec": 0, 00:05:15.243 "rw_mbytes_per_sec": 0, 00:05:15.243 "r_mbytes_per_sec": 0, 00:05:15.243 "w_mbytes_per_sec": 0 00:05:15.243 }, 00:05:15.243 "claimed": false, 00:05:15.243 "zoned": false, 00:05:15.243 "supported_io_types": { 00:05:15.243 "read": true, 00:05:15.243 "write": true, 00:05:15.243 "unmap": true, 00:05:15.243 "flush": true, 00:05:15.243 "reset": true, 00:05:15.243 "nvme_admin": false, 00:05:15.243 "nvme_io": false, 00:05:15.243 "nvme_io_md": false, 00:05:15.243 "write_zeroes": true, 00:05:15.243 "zcopy": true, 00:05:15.243 "get_zone_info": false, 00:05:15.243 "zone_management": false, 00:05:15.243 "zone_append": false, 00:05:15.243 "compare": false, 00:05:15.243 "compare_and_write": false, 00:05:15.243 "abort": true, 00:05:15.243 "seek_hole": false, 00:05:15.243 "seek_data": false, 00:05:15.243 "copy": true, 00:05:15.243 "nvme_iov_md": false 00:05:15.243 }, 00:05:15.243 "memory_domains": [ 00:05:15.243 { 00:05:15.243 "dma_device_id": "system", 00:05:15.243 "dma_device_type": 1 00:05:15.243 }, 00:05:15.243 { 00:05:15.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.243 "dma_device_type": 2 00:05:15.243 } 00:05:15.243 ], 00:05:15.243 "driver_specific": { 00:05:15.243 "passthru": { 00:05:15.243 "name": "Passthru0", 00:05:15.243 "base_bdev_name": "Malloc2" 00:05:15.243 } 00:05:15.243 } 00:05:15.243 } 00:05:15.243 ]' 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:15.243 ************************************ 00:05:15.243 END TEST rpc_daemon_integrity 00:05:15.243 ************************************ 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:15.243 00:05:15.243 real 0m0.212s 00:05:15.243 user 0m0.122s 00:05:15.243 sys 0m0.035s 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.243 18:18:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.243 18:18:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:15.243 18:18:35 rpc -- rpc/rpc.sh@84 -- # killprocess 69482 00:05:15.243 18:18:35 rpc -- 
common/autotest_common.sh@954 -- # '[' -z 69482 ']' 00:05:15.243 18:18:35 rpc -- common/autotest_common.sh@958 -- # kill -0 69482 00:05:15.243 18:18:35 rpc -- common/autotest_common.sh@959 -- # uname 00:05:15.243 18:18:35 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.243 18:18:35 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69482 00:05:15.243 killing process with pid 69482 00:05:15.243 18:18:35 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.243 18:18:35 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.243 18:18:35 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69482' 00:05:15.243 18:18:35 rpc -- common/autotest_common.sh@973 -- # kill 69482 00:05:15.243 18:18:35 rpc -- common/autotest_common.sh@978 -- # wait 69482 00:05:15.508 ************************************ 00:05:15.508 00:05:15.508 real 0m2.252s 00:05:15.508 user 0m2.687s 00:05:15.508 sys 0m0.594s 00:05:15.508 18:18:35 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.508 18:18:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.508 END TEST rpc 00:05:15.508 ************************************ 00:05:15.508 18:18:35 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:15.508 18:18:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.508 18:18:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.508 18:18:35 -- common/autotest_common.sh@10 -- # set +x 00:05:15.508 ************************************ 00:05:15.508 START TEST skip_rpc 00:05:15.508 ************************************ 00:05:15.508 18:18:35 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:15.508 * Looking for test storage... 00:05:15.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:15.508 18:18:35 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.508 18:18:35 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:15.508 18:18:35 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.768 18:18:35 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.768 18:18:35 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.769 18:18:35 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:15.769 18:18:35 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.769 18:18:35 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.769 --rc genhtml_branch_coverage=1 00:05:15.769 --rc genhtml_function_coverage=1 00:05:15.769 --rc genhtml_legend=1 00:05:15.769 --rc geninfo_all_blocks=1 00:05:15.769 --rc geninfo_unexecuted_blocks=1 00:05:15.769 00:05:15.769 ' 00:05:15.769 18:18:35 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.769 --rc genhtml_branch_coverage=1 00:05:15.769 --rc genhtml_function_coverage=1 00:05:15.769 --rc genhtml_legend=1 00:05:15.769 --rc geninfo_all_blocks=1 00:05:15.769 --rc geninfo_unexecuted_blocks=1 00:05:15.769 00:05:15.769 ' 00:05:15.769 18:18:35 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.769 --rc genhtml_branch_coverage=1 00:05:15.769 --rc genhtml_function_coverage=1 00:05:15.769 --rc genhtml_legend=1 00:05:15.769 --rc geninfo_all_blocks=1 00:05:15.769 --rc geninfo_unexecuted_blocks=1 00:05:15.769 00:05:15.769 ' 00:05:15.769 18:18:35 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.769 --rc genhtml_branch_coverage=1 00:05:15.769 --rc genhtml_function_coverage=1 00:05:15.769 --rc genhtml_legend=1 00:05:15.769 --rc geninfo_all_blocks=1 00:05:15.769 --rc geninfo_unexecuted_blocks=1 00:05:15.769 00:05:15.769 ' 00:05:15.769 18:18:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:15.769 18:18:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:15.769 18:18:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:15.769 18:18:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.769 18:18:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.769 18:18:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.769 ************************************ 00:05:15.769 START TEST skip_rpc 00:05:15.769 ************************************ 00:05:15.769 18:18:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:15.769 18:18:35 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=69678 00:05:15.769 18:18:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.769 18:18:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:15.769 18:18:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:15.769 [2024-11-29 18:18:35.541961] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:05:15.769 [2024-11-29 18:18:35.542081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69678 ] 00:05:16.028 [2024-11-29 18:18:35.700614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.028 [2024-11-29 18:18:35.720501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.320 18:18:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:21.320 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69678 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69678 ']' 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69678 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69678 00:05:21.321 killing process with pid 69678 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69678' 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@973 
-- # kill 69678 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69678 00:05:21.321 ************************************ 00:05:21.321 END TEST skip_rpc 00:05:21.321 ************************************ 00:05:21.321 00:05:21.321 real 0m5.254s 00:05:21.321 user 0m4.914s 00:05:21.321 sys 0m0.242s 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.321 18:18:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.321 18:18:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:21.321 18:18:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.321 18:18:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.321 18:18:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.321 ************************************ 00:05:21.321 START TEST skip_rpc_with_json 00:05:21.321 ************************************ 00:05:21.321 18:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:21.321 18:18:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:21.321 18:18:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69760 00:05:21.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.321 18:18:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.321 18:18:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.321 18:18:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69760 00:05:21.321 18:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69760 ']' 00:05:21.321 18:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.321 18:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.321 18:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.321 18:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.321 18:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.321 [2024-11-29 18:18:40.833296] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
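Between the two startup records here, the harness has launched spdk_tgt in the background and is blocking in waitforlisten until the RPC socket answers. A minimal sketch of that polling pattern, assuming scripts/rpc.py and the default /var/tmp/spdk.sock path (this is not the actual autotest_common.sh implementation):

    # Poll the Unix-domain RPC socket until the target responds, or give up.
    waitfor_rpc_sketch() {
        local tries=100
        while (( tries-- > 0 )); do
            # spdk_get_version is the same cheap RPC exercised elsewhere in this
            # log; a successful reply means the server is accepting requests.
            if scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }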
00:05:21.321 [2024-11-29 18:18:40.833406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69760 ] 00:05:21.321 [2024-11-29 18:18:40.986556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.321 [2024-11-29 18:18:41.003926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.893 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.893 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:21.893 18:18:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:21.893 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.893 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.893 [2024-11-29 18:18:41.666908] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:21.893 request: 00:05:21.893 { 00:05:21.893 "trtype": "tcp", 00:05:21.893 "method": "nvmf_get_transports", 00:05:21.893 "req_id": 1 00:05:21.893 } 00:05:21.893 Got JSON-RPC error response 00:05:21.893 response: 00:05:21.893 { 00:05:21.893 "code": -19, 00:05:21.893 "message": "No such device" 00:05:21.893 } 00:05:21.893 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:21.893 18:18:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:21.893 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.893 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.893 [2024-11-29 18:18:41.678991] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:21.893 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.893 18:18:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:21.893 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.893 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.154 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.154 18:18:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:22.154 { 00:05:22.154 "subsystems": [ 00:05:22.154 { 00:05:22.154 "subsystem": "fsdev", 00:05:22.154 "config": [ 00:05:22.154 { 00:05:22.154 "method": "fsdev_set_opts", 00:05:22.154 "params": { 00:05:22.154 "fsdev_io_pool_size": 65535, 00:05:22.154 "fsdev_io_cache_size": 256 00:05:22.154 } 00:05:22.154 } 00:05:22.154 ] 00:05:22.154 }, 00:05:22.154 { 00:05:22.154 "subsystem": "keyring", 00:05:22.154 "config": [] 00:05:22.154 }, 00:05:22.154 { 00:05:22.154 "subsystem": "iobuf", 00:05:22.154 "config": [ 00:05:22.154 { 00:05:22.154 "method": "iobuf_set_options", 00:05:22.154 "params": { 00:05:22.154 "small_pool_count": 8192, 00:05:22.154 "large_pool_count": 1024, 00:05:22.154 "small_bufsize": 8192, 00:05:22.154 "large_bufsize": 135168, 00:05:22.154 "enable_numa": false 00:05:22.154 } 00:05:22.154 } 00:05:22.154 ] 00:05:22.154 }, 00:05:22.154 { 00:05:22.154 "subsystem": "sock", 00:05:22.154 "config": [ 00:05:22.154 { 
00:05:22.154 "method": "sock_set_default_impl", 00:05:22.154 "params": { 00:05:22.154 "impl_name": "posix" 00:05:22.154 } 00:05:22.154 }, 00:05:22.154 { 00:05:22.154 "method": "sock_impl_set_options", 00:05:22.154 "params": { 00:05:22.154 "impl_name": "ssl", 00:05:22.154 "recv_buf_size": 4096, 00:05:22.154 "send_buf_size": 4096, 00:05:22.154 "enable_recv_pipe": true, 00:05:22.154 "enable_quickack": false, 00:05:22.154 "enable_placement_id": 0, 00:05:22.154 "enable_zerocopy_send_server": true, 00:05:22.154 "enable_zerocopy_send_client": false, 00:05:22.154 "zerocopy_threshold": 0, 00:05:22.154 "tls_version": 0, 00:05:22.154 "enable_ktls": false 00:05:22.154 } 00:05:22.154 }, 00:05:22.154 { 00:05:22.154 "method": "sock_impl_set_options", 00:05:22.154 "params": { 00:05:22.154 "impl_name": "posix", 00:05:22.154 "recv_buf_size": 2097152, 00:05:22.154 "send_buf_size": 2097152, 00:05:22.154 "enable_recv_pipe": true, 00:05:22.154 "enable_quickack": false, 00:05:22.154 "enable_placement_id": 0, 00:05:22.154 "enable_zerocopy_send_server": true, 00:05:22.154 "enable_zerocopy_send_client": false, 00:05:22.154 "zerocopy_threshold": 0, 00:05:22.154 "tls_version": 0, 00:05:22.154 "enable_ktls": false 00:05:22.154 } 00:05:22.154 } 00:05:22.154 ] 00:05:22.154 }, 00:05:22.154 { 00:05:22.154 "subsystem": "vmd", 00:05:22.154 "config": [] 00:05:22.154 }, 00:05:22.154 { 00:05:22.154 "subsystem": "accel", 00:05:22.154 "config": [ 00:05:22.154 { 00:05:22.154 "method": "accel_set_options", 00:05:22.154 "params": { 00:05:22.154 "small_cache_size": 128, 00:05:22.154 "large_cache_size": 16, 00:05:22.154 "task_count": 2048, 00:05:22.154 "sequence_count": 2048, 00:05:22.154 "buf_count": 2048 00:05:22.154 } 00:05:22.154 } 00:05:22.154 ] 00:05:22.154 }, 00:05:22.154 { 00:05:22.154 "subsystem": "bdev", 00:05:22.154 "config": [ 00:05:22.154 { 00:05:22.154 "method": "bdev_set_options", 00:05:22.154 "params": { 00:05:22.154 "bdev_io_pool_size": 65535, 00:05:22.154 "bdev_io_cache_size": 256, 00:05:22.154 "bdev_auto_examine": true, 00:05:22.154 "iobuf_small_cache_size": 128, 00:05:22.154 "iobuf_large_cache_size": 16 00:05:22.154 } 00:05:22.154 }, 00:05:22.154 { 00:05:22.154 "method": "bdev_raid_set_options", 00:05:22.154 "params": { 00:05:22.154 "process_window_size_kb": 1024, 00:05:22.154 "process_max_bandwidth_mb_sec": 0 00:05:22.154 } 00:05:22.154 }, 00:05:22.154 { 00:05:22.154 "method": "bdev_iscsi_set_options", 00:05:22.154 "params": { 00:05:22.154 "timeout_sec": 30 00:05:22.154 } 00:05:22.154 }, 00:05:22.154 { 00:05:22.154 "method": "bdev_nvme_set_options", 00:05:22.154 "params": { 00:05:22.154 "action_on_timeout": "none", 00:05:22.154 "timeout_us": 0, 00:05:22.154 "timeout_admin_us": 0, 00:05:22.154 "keep_alive_timeout_ms": 10000, 00:05:22.154 "arbitration_burst": 0, 00:05:22.154 "low_priority_weight": 0, 00:05:22.154 "medium_priority_weight": 0, 00:05:22.154 "high_priority_weight": 0, 00:05:22.154 "nvme_adminq_poll_period_us": 10000, 00:05:22.154 "nvme_ioq_poll_period_us": 0, 00:05:22.154 "io_queue_requests": 0, 00:05:22.154 "delay_cmd_submit": true, 00:05:22.154 "transport_retry_count": 4, 00:05:22.154 "bdev_retry_count": 3, 00:05:22.154 "transport_ack_timeout": 0, 00:05:22.154 "ctrlr_loss_timeout_sec": 0, 00:05:22.154 "reconnect_delay_sec": 0, 00:05:22.154 "fast_io_fail_timeout_sec": 0, 00:05:22.154 "disable_auto_failback": false, 00:05:22.154 "generate_uuids": false, 00:05:22.154 "transport_tos": 0, 00:05:22.154 "nvme_error_stat": false, 00:05:22.154 "rdma_srq_size": 0, 00:05:22.154 "io_path_stat": false, 
00:05:22.154 "allow_accel_sequence": false, 00:05:22.154 "rdma_max_cq_size": 0, 00:05:22.154 "rdma_cm_event_timeout_ms": 0, 00:05:22.154 "dhchap_digests": [ 00:05:22.154 "sha256", 00:05:22.154 "sha384", 00:05:22.154 "sha512" 00:05:22.154 ], 00:05:22.154 "dhchap_dhgroups": [ 00:05:22.154 "null", 00:05:22.154 "ffdhe2048", 00:05:22.154 "ffdhe3072", 00:05:22.154 "ffdhe4096", 00:05:22.154 "ffdhe6144", 00:05:22.154 "ffdhe8192" 00:05:22.154 ] 00:05:22.154 } 00:05:22.154 }, 00:05:22.154 { 00:05:22.154 "method": "bdev_nvme_set_hotplug", 00:05:22.154 "params": { 00:05:22.154 "period_us": 100000, 00:05:22.154 "enable": false 00:05:22.154 } 00:05:22.155 }, 00:05:22.155 { 00:05:22.155 "method": "bdev_wait_for_examine" 00:05:22.155 } 00:05:22.155 ] 00:05:22.155 }, 00:05:22.155 { 00:05:22.155 "subsystem": "scsi", 00:05:22.155 "config": null 00:05:22.155 }, 00:05:22.155 { 00:05:22.155 "subsystem": "scheduler", 00:05:22.155 "config": [ 00:05:22.155 { 00:05:22.155 "method": "framework_set_scheduler", 00:05:22.155 "params": { 00:05:22.155 "name": "static" 00:05:22.155 } 00:05:22.155 } 00:05:22.155 ] 00:05:22.155 }, 00:05:22.155 { 00:05:22.155 "subsystem": "vhost_scsi", 00:05:22.155 "config": [] 00:05:22.155 }, 00:05:22.155 { 00:05:22.155 "subsystem": "vhost_blk", 00:05:22.155 "config": [] 00:05:22.155 }, 00:05:22.155 { 00:05:22.155 "subsystem": "ublk", 00:05:22.155 "config": [] 00:05:22.155 }, 00:05:22.155 { 00:05:22.155 "subsystem": "nbd", 00:05:22.155 "config": [] 00:05:22.155 }, 00:05:22.155 { 00:05:22.155 "subsystem": "nvmf", 00:05:22.155 "config": [ 00:05:22.155 { 00:05:22.155 "method": "nvmf_set_config", 00:05:22.155 "params": { 00:05:22.155 "discovery_filter": "match_any", 00:05:22.155 "admin_cmd_passthru": { 00:05:22.155 "identify_ctrlr": false 00:05:22.155 }, 00:05:22.155 "dhchap_digests": [ 00:05:22.155 "sha256", 00:05:22.155 "sha384", 00:05:22.155 "sha512" 00:05:22.155 ], 00:05:22.155 "dhchap_dhgroups": [ 00:05:22.155 "null", 00:05:22.155 "ffdhe2048", 00:05:22.155 "ffdhe3072", 00:05:22.155 "ffdhe4096", 00:05:22.155 "ffdhe6144", 00:05:22.155 "ffdhe8192" 00:05:22.155 ] 00:05:22.155 } 00:05:22.155 }, 00:05:22.155 { 00:05:22.155 "method": "nvmf_set_max_subsystems", 00:05:22.155 "params": { 00:05:22.155 "max_subsystems": 1024 00:05:22.155 } 00:05:22.155 }, 00:05:22.155 { 00:05:22.155 "method": "nvmf_set_crdt", 00:05:22.155 "params": { 00:05:22.155 "crdt1": 0, 00:05:22.155 "crdt2": 0, 00:05:22.155 "crdt3": 0 00:05:22.155 } 00:05:22.155 }, 00:05:22.155 { 00:05:22.155 "method": "nvmf_create_transport", 00:05:22.155 "params": { 00:05:22.155 "trtype": "TCP", 00:05:22.155 "max_queue_depth": 128, 00:05:22.155 "max_io_qpairs_per_ctrlr": 127, 00:05:22.155 "in_capsule_data_size": 4096, 00:05:22.155 "max_io_size": 131072, 00:05:22.155 "io_unit_size": 131072, 00:05:22.155 "max_aq_depth": 128, 00:05:22.155 "num_shared_buffers": 511, 00:05:22.155 "buf_cache_size": 4294967295, 00:05:22.155 "dif_insert_or_strip": false, 00:05:22.155 "zcopy": false, 00:05:22.155 "c2h_success": true, 00:05:22.155 "sock_priority": 0, 00:05:22.155 "abort_timeout_sec": 1, 00:05:22.155 "ack_timeout": 0, 00:05:22.155 "data_wr_pool_size": 0 00:05:22.155 } 00:05:22.155 } 00:05:22.155 ] 00:05:22.155 }, 00:05:22.155 { 00:05:22.155 "subsystem": "iscsi", 00:05:22.155 "config": [ 00:05:22.155 { 00:05:22.155 "method": "iscsi_set_options", 00:05:22.155 "params": { 00:05:22.155 "node_base": "iqn.2016-06.io.spdk", 00:05:22.155 "max_sessions": 128, 00:05:22.155 "max_connections_per_session": 2, 00:05:22.155 "max_queue_depth": 64, 00:05:22.155 
"default_time2wait": 2, 00:05:22.155 "default_time2retain": 20, 00:05:22.155 "first_burst_length": 8192, 00:05:22.155 "immediate_data": true, 00:05:22.155 "allow_duplicated_isid": false, 00:05:22.155 "error_recovery_level": 0, 00:05:22.155 "nop_timeout": 60, 00:05:22.155 "nop_in_interval": 30, 00:05:22.155 "disable_chap": false, 00:05:22.155 "require_chap": false, 00:05:22.155 "mutual_chap": false, 00:05:22.155 "chap_group": 0, 00:05:22.155 "max_large_datain_per_connection": 64, 00:05:22.155 "max_r2t_per_connection": 4, 00:05:22.155 "pdu_pool_size": 36864, 00:05:22.155 "immediate_data_pool_size": 16384, 00:05:22.155 "data_out_pool_size": 2048 00:05:22.155 } 00:05:22.155 } 00:05:22.155 ] 00:05:22.155 } 00:05:22.155 ] 00:05:22.155 } 00:05:22.155 18:18:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:22.155 18:18:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69760 00:05:22.155 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69760 ']' 00:05:22.155 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69760 00:05:22.155 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:22.155 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.155 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69760 00:05:22.155 killing process with pid 69760 00:05:22.155 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.155 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.155 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69760' 00:05:22.155 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69760 00:05:22.155 18:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69760 00:05:22.415 18:18:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69789 00:05:22.415 18:18:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:22.415 18:18:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69789 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69789 ']' 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69789 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69789 00:05:27.722 killing process with pid 69789 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69789' 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 69789 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69789 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:27.722 00:05:27.722 real 0m6.588s 00:05:27.722 user 0m6.305s 00:05:27.722 sys 0m0.506s 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.722 ************************************ 00:05:27.722 END TEST skip_rpc_with_json 00:05:27.722 ************************************ 00:05:27.722 18:18:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:27.722 18:18:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.722 18:18:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.722 18:18:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.722 ************************************ 00:05:27.722 START TEST skip_rpc_with_delay 00:05:27.722 ************************************ 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:27.722 [2024-11-29 18:18:47.457076] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:27.722 00:05:27.722 real 0m0.111s 00:05:27.722 user 0m0.048s 00:05:27.722 sys 0m0.063s 00:05:27.722 18:18:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.722 ************************************ 00:05:27.723 END TEST skip_rpc_with_delay 00:05:27.723 ************************************ 00:05:27.723 18:18:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:27.723 18:18:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:27.723 18:18:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:27.723 18:18:47 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:27.723 18:18:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.723 18:18:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.723 18:18:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.723 ************************************ 00:05:27.723 START TEST exit_on_failed_rpc_init 00:05:27.723 ************************************ 00:05:27.723 18:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:27.723 18:18:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69900 00:05:27.723 18:18:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69900 00:05:27.723 18:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69900 ']' 00:05:27.723 18:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.723 18:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.723 18:18:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.723 18:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.723 18:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.723 18:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.723 [2024-11-29 18:18:47.611162] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
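exit_on_failed_rpc_init is starting its first target here (pid 69900, core mask 0x1); the test then launches a second target against the same default RPC socket and requires it to fail. A sketch of the collision being provoked, using only the paths and helpers visible in this log:

    # Two targets, one default RPC socket (/var/tmp/spdk.sock): the second must fail.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    first=$!
    waitforlisten "$first"                                     # harness helper, as traced above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2     # exits nonzero: socket in use
    kill "$first"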
00:05:27.723 [2024-11-29 18:18:47.611280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69900 ] 00:05:27.984 [2024-11-29 18:18:47.769171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.984 [2024-11-29 18:18:47.787542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.557 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.557 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:28.557 18:18:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.557 18:18:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.557 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:28.557 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.557 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.557 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.557 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.557 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.557 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.557 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.557 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.557 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:28.557 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.818 [2024-11-29 18:18:48.511541] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:05:28.818 [2024-11-29 18:18:48.511790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69918 ] 00:05:28.818 [2024-11-29 18:18:48.668644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.818 [2024-11-29 18:18:48.687724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.818 [2024-11-29 18:18:48.687942] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
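The second target (pid 69918, core mask 0x2) dies on the socket collision above, and the es traces that follow normalize its exit status: 234 is greater than 128, so 128 is subtracted (234 - 128 = 106), and 106 is then collapsed by the case statement to a generic failure code of 1, which (( !es == 0 )) asserts. A sketch of that bookkeeping, mirroring the @655-@679 trace lines rather than the exact helper source:

    es=234
    (( es > 128 )) && es=$(( es - 128 ))    # strip the 128 offset: es=106
    case "$es" in 106) es=1 ;; esac         # collapse known codes to plain failure
    (( !es == 0 )) && echo "command failed as expected (es=$es)"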
00:05:28.818 [2024-11-29 18:18:48.687968] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:28.818 [2024-11-29 18:18:48.687977] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:29.079 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:29.079 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:29.080 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:29.080 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:29.080 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:29.080 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:29.080 18:18:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:29.080 18:18:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69900 00:05:29.080 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69900 ']' 00:05:29.080 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69900 00:05:29.080 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:29.080 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.080 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69900 00:05:29.080 killing process with pid 69900 00:05:29.080 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.080 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.080 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69900' 00:05:29.080 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69900 00:05:29.080 18:18:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69900 00:05:29.341 00:05:29.341 real 0m1.487s 00:05:29.341 user 0m1.623s 00:05:29.341 sys 0m0.376s 00:05:29.341 18:18:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.341 ************************************ 00:05:29.341 18:18:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.341 END TEST exit_on_failed_rpc_init 00:05:29.341 ************************************ 00:05:29.341 18:18:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:29.341 00:05:29.341 real 0m13.746s 00:05:29.341 user 0m13.037s 00:05:29.341 sys 0m1.344s 00:05:29.341 18:18:49 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.341 ************************************ 00:05:29.341 END TEST skip_rpc 00:05:29.341 ************************************ 00:05:29.341 18:18:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.341 18:18:49 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:29.341 18:18:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.341 18:18:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.341 18:18:49 -- common/autotest_common.sh@10 -- # set +x 00:05:29.341 
************************************ 00:05:29.341 START TEST rpc_client 00:05:29.341 ************************************ 00:05:29.341 18:18:49 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:29.341 * Looking for test storage... 00:05:29.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:29.341 18:18:49 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.341 18:18:49 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.341 18:18:49 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.341 18:18:49 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.341 18:18:49 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:29.341 18:18:49 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.341 18:18:49 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.341 --rc genhtml_branch_coverage=1 00:05:29.341 --rc genhtml_function_coverage=1 00:05:29.341 --rc genhtml_legend=1 00:05:29.341 --rc geninfo_all_blocks=1 00:05:29.341 --rc geninfo_unexecuted_blocks=1 00:05:29.341 00:05:29.341 ' 00:05:29.341 18:18:49 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.342 --rc genhtml_branch_coverage=1 00:05:29.342 --rc genhtml_function_coverage=1 00:05:29.342 --rc genhtml_legend=1 00:05:29.342 --rc geninfo_all_blocks=1 00:05:29.342 --rc geninfo_unexecuted_blocks=1 00:05:29.342 00:05:29.342 ' 00:05:29.342 18:18:49 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.342 --rc genhtml_branch_coverage=1 00:05:29.342 --rc genhtml_function_coverage=1 00:05:29.342 --rc genhtml_legend=1 00:05:29.342 --rc geninfo_all_blocks=1 00:05:29.342 --rc geninfo_unexecuted_blocks=1 00:05:29.342 00:05:29.342 ' 00:05:29.342 18:18:49 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.342 --rc genhtml_branch_coverage=1 00:05:29.342 --rc genhtml_function_coverage=1 00:05:29.342 --rc genhtml_legend=1 00:05:29.342 --rc geninfo_all_blocks=1 00:05:29.342 --rc geninfo_unexecuted_blocks=1 00:05:29.342 00:05:29.342 ' 00:05:29.342 18:18:49 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:29.603 OK 00:05:29.603 18:18:49 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:29.603 ************************************ 00:05:29.603 END TEST rpc_client 00:05:29.603 ************************************ 00:05:29.603 00:05:29.603 real 0m0.187s 00:05:29.603 user 0m0.110s 00:05:29.603 sys 0m0.085s 00:05:29.603 18:18:49 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.603 18:18:49 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:29.603 18:18:49 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:29.603 18:18:49 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.603 18:18:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.603 18:18:49 -- common/autotest_common.sh@10 -- # set +x 00:05:29.603 ************************************ 00:05:29.603 START TEST json_config 00:05:29.603 ************************************ 00:05:29.603 18:18:49 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:29.603 18:18:49 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.603 18:18:49 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.603 18:18:49 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.603 18:18:49 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.603 18:18:49 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.603 18:18:49 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.603 18:18:49 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.603 18:18:49 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.603 18:18:49 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.603 18:18:49 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.603 18:18:49 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.603 18:18:49 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.603 18:18:49 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.603 18:18:49 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.603 18:18:49 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.603 18:18:49 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:29.603 18:18:49 json_config -- scripts/common.sh@345 -- # : 1 00:05:29.603 18:18:49 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.603 18:18:49 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.603 18:18:49 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:29.603 18:18:49 json_config -- scripts/common.sh@353 -- # local d=1 00:05:29.603 18:18:49 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.603 18:18:49 json_config -- scripts/common.sh@355 -- # echo 1 00:05:29.603 18:18:49 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.603 18:18:49 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:29.603 18:18:49 json_config -- scripts/common.sh@353 -- # local d=2 00:05:29.603 18:18:49 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.603 18:18:49 json_config -- scripts/common.sh@355 -- # echo 2 00:05:29.603 18:18:49 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.603 18:18:49 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.603 18:18:49 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.603 18:18:49 json_config -- scripts/common.sh@368 -- # return 0 00:05:29.603 18:18:49 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.603 18:18:49 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.604 --rc genhtml_branch_coverage=1 00:05:29.604 --rc genhtml_function_coverage=1 00:05:29.604 --rc genhtml_legend=1 00:05:29.604 --rc geninfo_all_blocks=1 00:05:29.604 --rc geninfo_unexecuted_blocks=1 00:05:29.604 00:05:29.604 ' 00:05:29.604 18:18:49 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.604 --rc genhtml_branch_coverage=1 00:05:29.604 --rc genhtml_function_coverage=1 00:05:29.604 --rc genhtml_legend=1 00:05:29.604 --rc geninfo_all_blocks=1 00:05:29.604 --rc geninfo_unexecuted_blocks=1 00:05:29.604 00:05:29.604 ' 00:05:29.604 18:18:49 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.604 --rc genhtml_branch_coverage=1 00:05:29.604 --rc genhtml_function_coverage=1 00:05:29.604 --rc genhtml_legend=1 00:05:29.604 --rc geninfo_all_blocks=1 00:05:29.604 --rc geninfo_unexecuted_blocks=1 00:05:29.604 00:05:29.604 ' 00:05:29.604 18:18:49 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.604 --rc genhtml_branch_coverage=1 00:05:29.604 --rc genhtml_function_coverage=1 00:05:29.604 --rc genhtml_legend=1 00:05:29.604 --rc geninfo_all_blocks=1 00:05:29.604 --rc geninfo_unexecuted_blocks=1 00:05:29.604 00:05:29.604 ' 00:05:29.604 18:18:49 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.604 18:18:49 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:818795fc-dc27-44ec-8ba6-9a6f5db0f26d 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=818795fc-dc27-44ec-8ba6-9a6f5db0f26d 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:29.604 18:18:49 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:29.604 18:18:49 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.604 18:18:49 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.604 18:18:49 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.604 18:18:49 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.604 18:18:49 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.604 18:18:49 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.604 18:18:49 json_config -- paths/export.sh@5 -- # export PATH 00:05:29.604 18:18:49 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@51 -- # : 0 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:29.604 18:18:49 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:29.604 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:29.604 18:18:49 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:29.604 18:18:49 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:29.604 18:18:49 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:29.604 18:18:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:29.604 18:18:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:29.604 18:18:49 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:29.604 18:18:49 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:29.604 WARNING: No tests are enabled so not running JSON configuration tests 00:05:29.604 18:18:49 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:29.604 00:05:29.604 real 0m0.134s 00:05:29.604 user 0m0.091s 00:05:29.604 sys 0m0.044s 00:05:29.604 18:18:49 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.604 18:18:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.604 ************************************ 00:05:29.604 END TEST json_config 00:05:29.604 ************************************ 00:05:29.604 18:18:49 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:29.604 18:18:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.604 18:18:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.604 18:18:49 -- common/autotest_common.sh@10 -- # set +x 00:05:29.604 ************************************ 00:05:29.604 START TEST json_config_extra_key 00:05:29.604 ************************************ 00:05:29.604 18:18:49 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:29.866 18:18:49 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.866 18:18:49 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.866 18:18:49 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.866 18:18:49 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.866 18:18:49 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.866 18:18:49 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:29.866 18:18:49 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.866 18:18:49 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.866 --rc genhtml_branch_coverage=1 00:05:29.866 --rc genhtml_function_coverage=1 00:05:29.866 --rc genhtml_legend=1 00:05:29.866 --rc geninfo_all_blocks=1 00:05:29.866 --rc geninfo_unexecuted_blocks=1 00:05:29.866 00:05:29.866 ' 00:05:29.866 18:18:49 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.866 --rc genhtml_branch_coverage=1 00:05:29.866 --rc genhtml_function_coverage=1 00:05:29.866 --rc genhtml_legend=1 00:05:29.866 --rc geninfo_all_blocks=1 00:05:29.867 --rc geninfo_unexecuted_blocks=1 00:05:29.867 00:05:29.867 ' 00:05:29.867 18:18:49 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.867 --rc genhtml_branch_coverage=1 00:05:29.867 --rc genhtml_function_coverage=1 00:05:29.867 --rc genhtml_legend=1 00:05:29.867 --rc geninfo_all_blocks=1 00:05:29.867 --rc geninfo_unexecuted_blocks=1 00:05:29.867 00:05:29.867 ' 00:05:29.867 18:18:49 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.867 --rc genhtml_branch_coverage=1 00:05:29.867 --rc 
genhtml_function_coverage=1 00:05:29.867 --rc genhtml_legend=1 00:05:29.867 --rc geninfo_all_blocks=1 00:05:29.867 --rc geninfo_unexecuted_blocks=1 00:05:29.867 00:05:29.867 ' 00:05:29.867 18:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:818795fc-dc27-44ec-8ba6-9a6f5db0f26d 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=818795fc-dc27-44ec-8ba6-9a6f5db0f26d 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:29.867 18:18:49 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:29.867 18:18:49 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.867 18:18:49 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.867 18:18:49 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.867 18:18:49 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.867 18:18:49 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.867 18:18:49 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.867 18:18:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:29.867 18:18:49 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:29.867 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:29.867 18:18:49 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:29.867 18:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:29.867 18:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:29.867 18:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:29.867 18:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:29.867 18:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:29.867 18:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:29.867 18:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:29.867 18:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:29.867 18:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:29.867 18:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:29.867 INFO: launching applications... 00:05:29.867 18:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
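Editor's aside: both json_config and json_config_extra_key trip the same latent bug while sourcing nvmf/common.sh. Line 33 runs '[' '' -eq 1 ']' and bash prints "[: : integer expression expected", because test's -eq operator requires integer operands and the variable expands to an empty string. The suites carry on (the test simply evaluates false), but a guard or a defaulted expansion avoids the noise. A minimal reproduction, with an illustrative variable name rather than the one actually used in common.sh:

    #!/usr/bin/env bash
    # Reproduce the nvmf/common.sh line 33 error: -eq needs integer operands.
    flag=""                          # empty, as in the trace above
    [ "$flag" -eq 1 ] && echo on     # -> "[: : integer expression expected"

    # Guard against the empty case instead of erroring:
    if [ -n "$flag" ] && [ "$flag" -eq 1 ]; then echo on; fi
    # ...or default the expansion so the operand is always an integer:
    [ "${flag:-0}" -eq 1 ] && echo on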
00:05:29.867 18:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:29.867 18:18:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:29.867 18:18:49 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:29.867 18:18:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:29.867 18:18:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:29.867 18:18:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:29.867 18:18:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.867 18:18:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.867 18:18:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70095 00:05:29.867 18:18:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:29.867 Waiting for target to run... 00:05:29.867 18:18:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70095 /var/tmp/spdk_tgt.sock 00:05:29.867 18:18:49 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 70095 ']' 00:05:29.867 18:18:49 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:29.867 18:18:49 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.867 18:18:49 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:29.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:29.868 18:18:49 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.868 18:18:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:29.868 18:18:49 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:29.868 [2024-11-29 18:18:49.706923] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:05:29.868 [2024-11-29 18:18:49.707435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70095 ] 00:05:30.128 [2024-11-29 18:18:50.016914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.128 [2024-11-29 18:18:50.029241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.700 18:18:50 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.700 18:18:50 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:30.700 18:18:50 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:30.700 00:05:30.700 18:18:50 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:30.700 INFO: shutting down applications... 
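Editor's aside: the start path above boils down to json_config_test_start_app launching spdk_tgt with -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json, then waitforlisten polling (max_retries=100) until pid 70095 answers on the RPC socket. A rough sketch of that polling idea, assuming rpc.py and an already launched target; the real helper lives in autotest_common.sh and does more bookkeeping:

    # Sketch of the waitforlisten idea: poll the RPC socket until it answers.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            # spdk_get_version is a cheap RPC; exits 0 once the socket accepts calls
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 spdk_get_version &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1                                      # never came up
    }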
00:05:30.700 18:18:50 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:30.700 18:18:50 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:30.700 18:18:50 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:30.700 18:18:50 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70095 ]] 00:05:30.700 18:18:50 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70095 00:05:30.700 18:18:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:30.700 18:18:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:30.700 18:18:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70095 00:05:30.700 18:18:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:31.272 18:18:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:31.272 18:18:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.272 18:18:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70095 00:05:31.272 18:18:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:31.272 18:18:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:31.272 18:18:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:31.272 SPDK target shutdown done 00:05:31.272 Success 00:05:31.272 18:18:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:31.272 18:18:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:31.272 ************************************ 00:05:31.272 END TEST json_config_extra_key 00:05:31.272 ************************************ 00:05:31.272 00:05:31.272 real 0m1.558s 00:05:31.272 user 0m1.271s 00:05:31.272 sys 0m0.335s 00:05:31.272 18:18:51 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.272 18:18:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:31.272 18:18:51 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:31.272 18:18:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.272 18:18:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.272 18:18:51 -- common/autotest_common.sh@10 -- # set +x 00:05:31.272 ************************************ 00:05:31.273 START TEST alias_rpc 00:05:31.273 ************************************ 00:05:31.273 18:18:51 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:31.273 * Looking for test storage... 
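Editor's aside: teardown is the mirror image. json_config_test_shutdown_app sends SIGINT and then polls kill -0 in half-second steps, up to 30 tries, before printing 'SPDK target shutdown done'. Condensed from the loop traced above:

    # Send SIGINT and wait up to ~15s (30 x 0.5s) for the target to exit.
    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"
        for (( i = 0; i < 30; i++ )); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        echo "process $pid ignored SIGINT" >&2
        return 1
    }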
00:05:31.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:31.273 18:18:51 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:31.273 18:18:51 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.273 18:18:51 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:31.533 18:18:51 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.533 18:18:51 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:31.533 18:18:51 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.533 18:18:51 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:31.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.533 --rc genhtml_branch_coverage=1 00:05:31.533 --rc genhtml_function_coverage=1 00:05:31.533 --rc genhtml_legend=1 00:05:31.533 --rc geninfo_all_blocks=1 00:05:31.533 --rc geninfo_unexecuted_blocks=1 00:05:31.533 00:05:31.533 ' 00:05:31.533 18:18:51 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:31.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.533 --rc genhtml_branch_coverage=1 00:05:31.533 --rc genhtml_function_coverage=1 00:05:31.533 --rc genhtml_legend=1 00:05:31.533 --rc geninfo_all_blocks=1 00:05:31.533 --rc geninfo_unexecuted_blocks=1 00:05:31.533 00:05:31.533 ' 00:05:31.533 18:18:51 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:31.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.533 --rc genhtml_branch_coverage=1 00:05:31.533 --rc genhtml_function_coverage=1 00:05:31.533 --rc genhtml_legend=1 00:05:31.533 --rc geninfo_all_blocks=1 00:05:31.533 --rc geninfo_unexecuted_blocks=1 00:05:31.533 00:05:31.533 ' 00:05:31.533 18:18:51 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:31.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.533 --rc genhtml_branch_coverage=1 00:05:31.533 --rc genhtml_function_coverage=1 00:05:31.533 --rc genhtml_legend=1 00:05:31.533 --rc geninfo_all_blocks=1 00:05:31.533 --rc geninfo_unexecuted_blocks=1 00:05:31.533 00:05:31.533 ' 00:05:31.533 18:18:51 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:31.533 18:18:51 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70169 00:05:31.533 18:18:51 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70169 00:05:31.533 18:18:51 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 70169 ']' 00:05:31.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.533 18:18:51 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.533 18:18:51 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.533 18:18:51 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.533 18:18:51 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.533 18:18:51 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.533 18:18:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.533 [2024-11-29 18:18:51.299564] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
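Editor's aside: every suite opens with the same lcov probe. lt 1.15 2 hands both strings to cmp_versions, which splits them on IFS=.-: into arrays and walks the components left to right; since the installed lcov reports 1.15 and 1 < 2, each suite enables the --rc compatibility options. The comparison, condensed into a sketch (not the scripts/common.sh original):

    # Component-wise dotted-version compare: returns 0 when $1 < $2.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo 'old lcov, enabling --rc options'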
00:05:31.533 [2024-11-29 18:18:51.299839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70169 ] 00:05:31.794 [2024-11-29 18:18:51.455269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.794 [2024-11-29 18:18:51.474036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.366 18:18:52 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.366 18:18:52 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:32.366 18:18:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:32.627 18:18:52 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70169 00:05:32.627 18:18:52 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 70169 ']' 00:05:32.627 18:18:52 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 70169 00:05:32.627 18:18:52 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:32.627 18:18:52 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.627 18:18:52 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70169 00:05:32.627 killing process with pid 70169 00:05:32.627 18:18:52 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.627 18:18:52 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.627 18:18:52 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70169' 00:05:32.627 18:18:52 alias_rpc -- common/autotest_common.sh@973 -- # kill 70169 00:05:32.627 18:18:52 alias_rpc -- common/autotest_common.sh@978 -- # wait 70169 00:05:32.888 ************************************ 00:05:32.888 END TEST alias_rpc 00:05:32.888 ************************************ 00:05:32.888 00:05:32.888 real 0m1.540s 00:05:32.888 user 0m1.694s 00:05:32.888 sys 0m0.345s 00:05:32.888 18:18:52 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.888 18:18:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.888 18:18:52 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:32.888 18:18:52 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:32.888 18:18:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.888 18:18:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.888 18:18:52 -- common/autotest_common.sh@10 -- # set +x 00:05:32.888 ************************************ 00:05:32.888 START TEST spdkcli_tcp 00:05:32.888 ************************************ 00:05:32.888 18:18:52 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:32.888 * Looking for test storage... 
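Editor's aside: alias_rpc's teardown shows the killprocess discipline traced above: confirm the pid is set and still alive (kill -0), check with ps --no-headers -o comm= that it still names the expected process (reactor_0 here, and never sudo), then kill and wait to reap it. Trimmed into a sketch:

    # Kill a test daemon only after verifying the pid still names our process.
    killprocess_sketch() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 0        # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1                # refuse to kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                       # reap if it is our child
    }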
00:05:32.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:32.888 18:18:52 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:32.888 18:18:52 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:32.888 18:18:52 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:33.149 18:18:52 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.149 18:18:52 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:33.149 18:18:52 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.149 18:18:52 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:33.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.149 --rc genhtml_branch_coverage=1 00:05:33.149 --rc genhtml_function_coverage=1 00:05:33.149 --rc genhtml_legend=1 00:05:33.149 --rc geninfo_all_blocks=1 00:05:33.149 --rc geninfo_unexecuted_blocks=1 00:05:33.149 00:05:33.149 ' 00:05:33.149 18:18:52 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:33.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.149 --rc genhtml_branch_coverage=1 00:05:33.149 --rc genhtml_function_coverage=1 00:05:33.149 --rc genhtml_legend=1 00:05:33.149 --rc geninfo_all_blocks=1 00:05:33.149 --rc geninfo_unexecuted_blocks=1 00:05:33.149 
00:05:33.149 ' 00:05:33.149 18:18:52 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:33.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.149 --rc genhtml_branch_coverage=1 00:05:33.149 --rc genhtml_function_coverage=1 00:05:33.149 --rc genhtml_legend=1 00:05:33.149 --rc geninfo_all_blocks=1 00:05:33.149 --rc geninfo_unexecuted_blocks=1 00:05:33.149 00:05:33.149 ' 00:05:33.149 18:18:52 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:33.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.149 --rc genhtml_branch_coverage=1 00:05:33.149 --rc genhtml_function_coverage=1 00:05:33.149 --rc genhtml_legend=1 00:05:33.149 --rc geninfo_all_blocks=1 00:05:33.149 --rc geninfo_unexecuted_blocks=1 00:05:33.149 00:05:33.149 ' 00:05:33.149 18:18:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:33.149 18:18:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:33.149 18:18:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:33.149 18:18:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:33.149 18:18:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:33.149 18:18:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:33.149 18:18:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:33.149 18:18:52 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.149 18:18:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.149 18:18:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70248 00:05:33.149 18:18:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70248 00:05:33.149 18:18:52 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 70248 ']' 00:05:33.149 18:18:52 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.149 18:18:52 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.149 18:18:52 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.149 18:18:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:33.149 18:18:52 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.149 18:18:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.149 [2024-11-29 18:18:52.882028] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
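Editor's aside: the LCOV_OPTS/LCOV exports repeated before every suite are the payoff of that version probe. On lcov 1.x the --rc pairs switch on branch and function coverage, and the genhtml_* keys preconfigure the HTML report. An illustrative end-to-end use of those same options (the directories are placeholders, not paths from this run):

    # Illustrative coverage capture/report with the exported --rc options.
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
         --capture --directory build/ --output-file coverage.info
    genhtml --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
            --rc genhtml_legend=1 --output-directory coverage_html coverage.info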
00:05:33.149 [2024-11-29 18:18:52.882289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70248 ] 00:05:33.149 [2024-11-29 18:18:53.040873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.408 [2024-11-29 18:18:53.060162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.408 [2024-11-29 18:18:53.060198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.979 18:18:53 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.979 18:18:53 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:33.979 18:18:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:33.979 18:18:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70265 00:05:33.979 18:18:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:34.240 [ 00:05:34.240 "bdev_malloc_delete", 00:05:34.240 "bdev_malloc_create", 00:05:34.240 "bdev_null_resize", 00:05:34.240 "bdev_null_delete", 00:05:34.240 "bdev_null_create", 00:05:34.240 "bdev_nvme_cuse_unregister", 00:05:34.240 "bdev_nvme_cuse_register", 00:05:34.240 "bdev_opal_new_user", 00:05:34.240 "bdev_opal_set_lock_state", 00:05:34.240 "bdev_opal_delete", 00:05:34.240 "bdev_opal_get_info", 00:05:34.240 "bdev_opal_create", 00:05:34.240 "bdev_nvme_opal_revert", 00:05:34.240 "bdev_nvme_opal_init", 00:05:34.240 "bdev_nvme_send_cmd", 00:05:34.240 "bdev_nvme_set_keys", 00:05:34.240 "bdev_nvme_get_path_iostat", 00:05:34.240 "bdev_nvme_get_mdns_discovery_info", 00:05:34.240 "bdev_nvme_stop_mdns_discovery", 00:05:34.240 "bdev_nvme_start_mdns_discovery", 00:05:34.240 "bdev_nvme_set_multipath_policy", 00:05:34.240 "bdev_nvme_set_preferred_path", 00:05:34.240 "bdev_nvme_get_io_paths", 00:05:34.240 "bdev_nvme_remove_error_injection", 00:05:34.240 "bdev_nvme_add_error_injection", 00:05:34.240 "bdev_nvme_get_discovery_info", 00:05:34.240 "bdev_nvme_stop_discovery", 00:05:34.240 "bdev_nvme_start_discovery", 00:05:34.240 "bdev_nvme_get_controller_health_info", 00:05:34.240 "bdev_nvme_disable_controller", 00:05:34.240 "bdev_nvme_enable_controller", 00:05:34.240 "bdev_nvme_reset_controller", 00:05:34.240 "bdev_nvme_get_transport_statistics", 00:05:34.240 "bdev_nvme_apply_firmware", 00:05:34.240 "bdev_nvme_detach_controller", 00:05:34.240 "bdev_nvme_get_controllers", 00:05:34.240 "bdev_nvme_attach_controller", 00:05:34.240 "bdev_nvme_set_hotplug", 00:05:34.240 "bdev_nvme_set_options", 00:05:34.240 "bdev_passthru_delete", 00:05:34.240 "bdev_passthru_create", 00:05:34.240 "bdev_lvol_set_parent_bdev", 00:05:34.240 "bdev_lvol_set_parent", 00:05:34.240 "bdev_lvol_check_shallow_copy", 00:05:34.240 "bdev_lvol_start_shallow_copy", 00:05:34.240 "bdev_lvol_grow_lvstore", 00:05:34.240 "bdev_lvol_get_lvols", 00:05:34.240 "bdev_lvol_get_lvstores", 00:05:34.240 "bdev_lvol_delete", 00:05:34.240 "bdev_lvol_set_read_only", 00:05:34.240 "bdev_lvol_resize", 00:05:34.240 "bdev_lvol_decouple_parent", 00:05:34.240 "bdev_lvol_inflate", 00:05:34.240 "bdev_lvol_rename", 00:05:34.240 "bdev_lvol_clone_bdev", 00:05:34.240 "bdev_lvol_clone", 00:05:34.240 "bdev_lvol_snapshot", 00:05:34.240 "bdev_lvol_create", 00:05:34.240 "bdev_lvol_delete_lvstore", 00:05:34.240 "bdev_lvol_rename_lvstore", 00:05:34.240 
"bdev_lvol_create_lvstore", 00:05:34.240 "bdev_raid_set_options", 00:05:34.240 "bdev_raid_remove_base_bdev", 00:05:34.240 "bdev_raid_add_base_bdev", 00:05:34.240 "bdev_raid_delete", 00:05:34.240 "bdev_raid_create", 00:05:34.240 "bdev_raid_get_bdevs", 00:05:34.240 "bdev_error_inject_error", 00:05:34.240 "bdev_error_delete", 00:05:34.240 "bdev_error_create", 00:05:34.240 "bdev_split_delete", 00:05:34.240 "bdev_split_create", 00:05:34.240 "bdev_delay_delete", 00:05:34.240 "bdev_delay_create", 00:05:34.240 "bdev_delay_update_latency", 00:05:34.240 "bdev_zone_block_delete", 00:05:34.240 "bdev_zone_block_create", 00:05:34.240 "blobfs_create", 00:05:34.240 "blobfs_detect", 00:05:34.240 "blobfs_set_cache_size", 00:05:34.240 "bdev_xnvme_delete", 00:05:34.240 "bdev_xnvme_create", 00:05:34.240 "bdev_aio_delete", 00:05:34.240 "bdev_aio_rescan", 00:05:34.240 "bdev_aio_create", 00:05:34.240 "bdev_ftl_set_property", 00:05:34.240 "bdev_ftl_get_properties", 00:05:34.240 "bdev_ftl_get_stats", 00:05:34.240 "bdev_ftl_unmap", 00:05:34.240 "bdev_ftl_unload", 00:05:34.240 "bdev_ftl_delete", 00:05:34.240 "bdev_ftl_load", 00:05:34.240 "bdev_ftl_create", 00:05:34.240 "bdev_virtio_attach_controller", 00:05:34.240 "bdev_virtio_scsi_get_devices", 00:05:34.240 "bdev_virtio_detach_controller", 00:05:34.240 "bdev_virtio_blk_set_hotplug", 00:05:34.240 "bdev_iscsi_delete", 00:05:34.240 "bdev_iscsi_create", 00:05:34.240 "bdev_iscsi_set_options", 00:05:34.240 "accel_error_inject_error", 00:05:34.240 "ioat_scan_accel_module", 00:05:34.240 "dsa_scan_accel_module", 00:05:34.240 "iaa_scan_accel_module", 00:05:34.240 "keyring_file_remove_key", 00:05:34.240 "keyring_file_add_key", 00:05:34.240 "keyring_linux_set_options", 00:05:34.240 "fsdev_aio_delete", 00:05:34.240 "fsdev_aio_create", 00:05:34.240 "iscsi_get_histogram", 00:05:34.240 "iscsi_enable_histogram", 00:05:34.240 "iscsi_set_options", 00:05:34.240 "iscsi_get_auth_groups", 00:05:34.240 "iscsi_auth_group_remove_secret", 00:05:34.240 "iscsi_auth_group_add_secret", 00:05:34.240 "iscsi_delete_auth_group", 00:05:34.240 "iscsi_create_auth_group", 00:05:34.240 "iscsi_set_discovery_auth", 00:05:34.240 "iscsi_get_options", 00:05:34.240 "iscsi_target_node_request_logout", 00:05:34.240 "iscsi_target_node_set_redirect", 00:05:34.240 "iscsi_target_node_set_auth", 00:05:34.240 "iscsi_target_node_add_lun", 00:05:34.240 "iscsi_get_stats", 00:05:34.240 "iscsi_get_connections", 00:05:34.240 "iscsi_portal_group_set_auth", 00:05:34.240 "iscsi_start_portal_group", 00:05:34.240 "iscsi_delete_portal_group", 00:05:34.240 "iscsi_create_portal_group", 00:05:34.240 "iscsi_get_portal_groups", 00:05:34.240 "iscsi_delete_target_node", 00:05:34.240 "iscsi_target_node_remove_pg_ig_maps", 00:05:34.241 "iscsi_target_node_add_pg_ig_maps", 00:05:34.241 "iscsi_create_target_node", 00:05:34.241 "iscsi_get_target_nodes", 00:05:34.241 "iscsi_delete_initiator_group", 00:05:34.241 "iscsi_initiator_group_remove_initiators", 00:05:34.241 "iscsi_initiator_group_add_initiators", 00:05:34.241 "iscsi_create_initiator_group", 00:05:34.241 "iscsi_get_initiator_groups", 00:05:34.241 "nvmf_set_crdt", 00:05:34.241 "nvmf_set_config", 00:05:34.241 "nvmf_set_max_subsystems", 00:05:34.241 "nvmf_stop_mdns_prr", 00:05:34.241 "nvmf_publish_mdns_prr", 00:05:34.241 "nvmf_subsystem_get_listeners", 00:05:34.241 "nvmf_subsystem_get_qpairs", 00:05:34.241 "nvmf_subsystem_get_controllers", 00:05:34.241 "nvmf_get_stats", 00:05:34.241 "nvmf_get_transports", 00:05:34.241 "nvmf_create_transport", 00:05:34.241 "nvmf_get_targets", 00:05:34.241 
"nvmf_delete_target", 00:05:34.241 "nvmf_create_target", 00:05:34.241 "nvmf_subsystem_allow_any_host", 00:05:34.241 "nvmf_subsystem_set_keys", 00:05:34.241 "nvmf_subsystem_remove_host", 00:05:34.241 "nvmf_subsystem_add_host", 00:05:34.241 "nvmf_ns_remove_host", 00:05:34.241 "nvmf_ns_add_host", 00:05:34.241 "nvmf_subsystem_remove_ns", 00:05:34.241 "nvmf_subsystem_set_ns_ana_group", 00:05:34.241 "nvmf_subsystem_add_ns", 00:05:34.241 "nvmf_subsystem_listener_set_ana_state", 00:05:34.241 "nvmf_discovery_get_referrals", 00:05:34.241 "nvmf_discovery_remove_referral", 00:05:34.241 "nvmf_discovery_add_referral", 00:05:34.241 "nvmf_subsystem_remove_listener", 00:05:34.241 "nvmf_subsystem_add_listener", 00:05:34.241 "nvmf_delete_subsystem", 00:05:34.241 "nvmf_create_subsystem", 00:05:34.241 "nvmf_get_subsystems", 00:05:34.241 "env_dpdk_get_mem_stats", 00:05:34.241 "nbd_get_disks", 00:05:34.241 "nbd_stop_disk", 00:05:34.241 "nbd_start_disk", 00:05:34.241 "ublk_recover_disk", 00:05:34.241 "ublk_get_disks", 00:05:34.241 "ublk_stop_disk", 00:05:34.241 "ublk_start_disk", 00:05:34.241 "ublk_destroy_target", 00:05:34.241 "ublk_create_target", 00:05:34.241 "virtio_blk_create_transport", 00:05:34.241 "virtio_blk_get_transports", 00:05:34.241 "vhost_controller_set_coalescing", 00:05:34.241 "vhost_get_controllers", 00:05:34.241 "vhost_delete_controller", 00:05:34.241 "vhost_create_blk_controller", 00:05:34.241 "vhost_scsi_controller_remove_target", 00:05:34.241 "vhost_scsi_controller_add_target", 00:05:34.241 "vhost_start_scsi_controller", 00:05:34.241 "vhost_create_scsi_controller", 00:05:34.241 "thread_set_cpumask", 00:05:34.241 "scheduler_set_options", 00:05:34.241 "framework_get_governor", 00:05:34.241 "framework_get_scheduler", 00:05:34.241 "framework_set_scheduler", 00:05:34.241 "framework_get_reactors", 00:05:34.241 "thread_get_io_channels", 00:05:34.241 "thread_get_pollers", 00:05:34.241 "thread_get_stats", 00:05:34.241 "framework_monitor_context_switch", 00:05:34.241 "spdk_kill_instance", 00:05:34.241 "log_enable_timestamps", 00:05:34.241 "log_get_flags", 00:05:34.241 "log_clear_flag", 00:05:34.241 "log_set_flag", 00:05:34.241 "log_get_level", 00:05:34.241 "log_set_level", 00:05:34.241 "log_get_print_level", 00:05:34.241 "log_set_print_level", 00:05:34.241 "framework_enable_cpumask_locks", 00:05:34.241 "framework_disable_cpumask_locks", 00:05:34.241 "framework_wait_init", 00:05:34.241 "framework_start_init", 00:05:34.241 "scsi_get_devices", 00:05:34.241 "bdev_get_histogram", 00:05:34.241 "bdev_enable_histogram", 00:05:34.241 "bdev_set_qos_limit", 00:05:34.241 "bdev_set_qd_sampling_period", 00:05:34.241 "bdev_get_bdevs", 00:05:34.241 "bdev_reset_iostat", 00:05:34.241 "bdev_get_iostat", 00:05:34.241 "bdev_examine", 00:05:34.241 "bdev_wait_for_examine", 00:05:34.241 "bdev_set_options", 00:05:34.241 "accel_get_stats", 00:05:34.241 "accel_set_options", 00:05:34.241 "accel_set_driver", 00:05:34.241 "accel_crypto_key_destroy", 00:05:34.241 "accel_crypto_keys_get", 00:05:34.241 "accel_crypto_key_create", 00:05:34.241 "accel_assign_opc", 00:05:34.241 "accel_get_module_info", 00:05:34.241 "accel_get_opc_assignments", 00:05:34.241 "vmd_rescan", 00:05:34.241 "vmd_remove_device", 00:05:34.241 "vmd_enable", 00:05:34.241 "sock_get_default_impl", 00:05:34.241 "sock_set_default_impl", 00:05:34.241 "sock_impl_set_options", 00:05:34.241 "sock_impl_get_options", 00:05:34.241 "iobuf_get_stats", 00:05:34.241 "iobuf_set_options", 00:05:34.241 "keyring_get_keys", 00:05:34.241 "framework_get_pci_devices", 00:05:34.241 
"framework_get_config", 00:05:34.241 "framework_get_subsystems", 00:05:34.241 "fsdev_set_opts", 00:05:34.241 "fsdev_get_opts", 00:05:34.241 "trace_get_info", 00:05:34.241 "trace_get_tpoint_group_mask", 00:05:34.241 "trace_disable_tpoint_group", 00:05:34.241 "trace_enable_tpoint_group", 00:05:34.241 "trace_clear_tpoint_mask", 00:05:34.241 "trace_set_tpoint_mask", 00:05:34.241 "notify_get_notifications", 00:05:34.241 "notify_get_types", 00:05:34.241 "spdk_get_version", 00:05:34.241 "rpc_get_methods" 00:05:34.241 ] 00:05:34.241 18:18:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:34.241 18:18:53 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:34.241 18:18:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.241 18:18:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:34.241 18:18:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70248 00:05:34.241 18:18:53 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 70248 ']' 00:05:34.241 18:18:53 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 70248 00:05:34.241 18:18:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:34.241 18:18:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.241 18:18:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70248 00:05:34.241 killing process with pid 70248 00:05:34.241 18:18:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.241 18:18:53 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.241 18:18:53 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70248' 00:05:34.241 18:18:53 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 70248 00:05:34.241 18:18:53 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 70248 00:05:34.502 00:05:34.502 real 0m1.548s 00:05:34.502 user 0m2.773s 00:05:34.502 sys 0m0.390s 00:05:34.502 18:18:54 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.502 18:18:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.502 ************************************ 00:05:34.502 END TEST spdkcli_tcp 00:05:34.502 ************************************ 00:05:34.502 18:18:54 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:34.502 18:18:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.502 18:18:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.502 18:18:54 -- common/autotest_common.sh@10 -- # set +x 00:05:34.502 ************************************ 00:05:34.502 START TEST dpdk_mem_utility 00:05:34.502 ************************************ 00:05:34.502 18:18:54 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:34.502 * Looking for test storage... 
00:05:34.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:34.502 18:18:54 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.502 18:18:54 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.502 18:18:54 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.502 18:18:54 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:34.502 18:18:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:34.503 18:18:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.503 18:18:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:34.503 18:18:54 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.503 18:18:54 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.503 18:18:54 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.503 18:18:54 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:34.503 18:18:54 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.503 18:18:54 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.503 --rc genhtml_branch_coverage=1 00:05:34.503 --rc genhtml_function_coverage=1 00:05:34.503 --rc genhtml_legend=1 00:05:34.503 --rc geninfo_all_blocks=1 00:05:34.503 --rc geninfo_unexecuted_blocks=1 00:05:34.503 00:05:34.503 ' 00:05:34.503 18:18:54 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.503 --rc 
genhtml_branch_coverage=1 00:05:34.503 --rc genhtml_function_coverage=1 00:05:34.503 --rc genhtml_legend=1 00:05:34.503 --rc geninfo_all_blocks=1 00:05:34.503 --rc geninfo_unexecuted_blocks=1 00:05:34.503 00:05:34.503 ' 00:05:34.503 18:18:54 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.503 --rc genhtml_branch_coverage=1 00:05:34.503 --rc genhtml_function_coverage=1 00:05:34.503 --rc genhtml_legend=1 00:05:34.503 --rc geninfo_all_blocks=1 00:05:34.503 --rc geninfo_unexecuted_blocks=1 00:05:34.503 00:05:34.503 ' 00:05:34.503 18:18:54 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.503 --rc genhtml_branch_coverage=1 00:05:34.503 --rc genhtml_function_coverage=1 00:05:34.503 --rc genhtml_legend=1 00:05:34.503 --rc geninfo_all_blocks=1 00:05:34.503 --rc geninfo_unexecuted_blocks=1 00:05:34.503 00:05:34.503 ' 00:05:34.503 18:18:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:34.503 18:18:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70343 00:05:34.503 18:18:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70343 00:05:34.503 18:18:54 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 70343 ']' 00:05:34.503 18:18:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.503 18:18:54 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.503 18:18:54 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.503 18:18:54 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.503 18:18:54 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.503 18:18:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:34.764 [2024-11-29 18:18:54.467114] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:05:34.764 [2024-11-29 18:18:54.467231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70343 ] 00:05:34.764 [2024-11-29 18:18:54.615586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.764 [2024-11-29 18:18:54.633849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.706 18:18:55 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.706 18:18:55 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:35.706 18:18:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:35.706 18:18:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:35.706 18:18:55 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.706 18:18:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.706 { 00:05:35.706 "filename": "/tmp/spdk_mem_dump.txt" 00:05:35.706 } 00:05:35.706 18:18:55 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.706 18:18:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:35.706 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:35.706 1 heaps totaling size 818.000000 MiB 00:05:35.706 size: 818.000000 MiB heap id: 0 00:05:35.706 end heaps---------- 00:05:35.706 9 mempools totaling size 603.782043 MiB 00:05:35.706 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:35.706 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:35.706 size: 100.555481 MiB name: bdev_io_70343 00:05:35.706 size: 50.003479 MiB name: msgpool_70343 00:05:35.706 size: 36.509338 MiB name: fsdev_io_70343 00:05:35.706 size: 21.763794 MiB name: PDU_Pool 00:05:35.706 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:35.706 size: 4.133484 MiB name: evtpool_70343 00:05:35.706 size: 0.026123 MiB name: Session_Pool 00:05:35.706 end mempools------- 00:05:35.706 6 memzones totaling size 4.142822 MiB 00:05:35.706 size: 1.000366 MiB name: RG_ring_0_70343 00:05:35.706 size: 1.000366 MiB name: RG_ring_1_70343 00:05:35.706 size: 1.000366 MiB name: RG_ring_4_70343 00:05:35.706 size: 1.000366 MiB name: RG_ring_5_70343 00:05:35.706 size: 0.125366 MiB name: RG_ring_2_70343 00:05:35.706 size: 0.015991 MiB name: RG_ring_3_70343 00:05:35.706 end memzones------- 00:05:35.706 18:18:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:35.706 heap id: 0 total size: 818.000000 MiB number of busy elements: 326 number of free elements: 15 00:05:35.706 list of free elements. 
size: 10.800842 MiB
00:05:35.706 element at address: 0x200019200000 with size: 0.999878 MiB
00:05:35.706 element at address: 0x200019400000 with size: 0.999878 MiB
00:05:35.706 element at address: 0x200032000000 with size: 0.994446 MiB
00:05:35.706 element at address: 0x200000400000 with size: 0.993958 MiB
00:05:35.706 element at address: 0x200006400000 with size: 0.959839 MiB
00:05:35.706 element at address: 0x200012c00000 with size: 0.944275 MiB
00:05:35.706 element at address: 0x200019600000 with size: 0.936584 MiB
00:05:35.706 element at address: 0x200000200000 with size: 0.717346 MiB
00:05:35.706 element at address: 0x20001ae00000 with size: 0.566040 MiB
00:05:35.706 element at address: 0x20000a600000 with size: 0.488892 MiB
00:05:35.706 element at address: 0x200000c00000 with size: 0.486267 MiB
00:05:35.706 element at address: 0x200019800000 with size: 0.485657 MiB
00:05:35.706 element at address: 0x200003e00000 with size: 0.480286 MiB
00:05:35.706 element at address: 0x200028200000 with size: 0.395752 MiB
00:05:35.706 element at address: 0x200000800000 with size: 0.351746 MiB
00:05:35.706 list of standard malloc elements. size: 199.270264 MiB
00:05:35.706 element at address: 0x20000a7fff80 with size: 132.000122 MiB
00:05:35.706 element at address: 0x2000065fff80 with size: 64.000122 MiB
00:05:35.706 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:05:35.706 element at address: 0x2000194fff80 with size: 1.000122 MiB
00:05:35.706 element at address: 0x2000196fff80 with size: 1.000122 MiB
00:05:35.706 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:05:35.706 element at address: 0x2000196eff00 with size: 0.062622 MiB
00:05:35.706 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:05:35.706 element at address: 0x2000196efdc0 with size: 0.000305 MiB
00:05:35.706 [remaining per-pool rows elided: every further entry is 0.000183 MiB, covering the address ranges 0x2000002d7c40-0x2000004ffe40, 0x20000085a0c0-0x2000008ffb40, 0x200000c7c7c0-0x200000cff0c0, 0x200003e7af40-0x200003efb980, 0x2000064fdd80, 0x20000a67d280-0x20000a6fdd80, 0x200012cf1bc0, 0x2000196efc40-0x2000198bc740, 0x20001ae90e80-0x20001ae95440 and 0x200028265500-0x20002826ff00]
00:05:35.708 list of memzone associated elements. size: 607.928894 MiB
00:05:35.708 element at address: 0x20001ae95500 with size: 211.416748 MiB
00:05:35.708 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:35.708 element at address: 0x20002826ffc0 with size: 157.562561 MiB
00:05:35.708 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:35.708 element at address: 0x200012df1e80 with size: 100.055054 MiB
00:05:35.708 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_70343_0
00:05:35.708 element at address: 0x200000dff380 with size: 48.003052 MiB
00:05:35.708 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70343_0
00:05:35.708 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:05:35.708 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70343_0
00:05:35.708 element at address: 0x2000199be940 with size: 20.255554 MiB
00:05:35.708 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:35.708 element at address: 0x2000321feb40 with size: 18.005066 MiB
00:05:35.708 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:35.708 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:05:35.708 associated memzone info: size: 3.000122 MiB name: MP_evtpool_70343_0
00:05:35.708 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:05:35.708 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70343
00:05:35.708 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:05:35.708 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70343
00:05:35.709 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:05:35.709 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:35.709 element at address: 0x2000198bc800 with size: 1.008118 MiB
00:05:35.709 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:35.709 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:05:35.709 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:35.709 element at address: 0x200003efba40 with size: 1.008118 MiB
00:05:35.709 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:35.709 element at address: 0x200000cff180 with size: 1.000488 MiB
00:05:35.709 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70343
00:05:35.709 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:05:35.709 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70343
00:05:35.709 element at address: 0x200012cf1c80 with size: 1.000488 MiB
00:05:35.709 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70343
00:05:35.709 element at address: 0x2000320fe940 with size: 1.000488 MiB
00:05:35.709 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70343
00:05:35.709 element at address: 0x20000087f740 with size: 0.500488 MiB
00:05:35.709 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70343
00:05:35.709 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:05:35.709 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70343
00:05:35.709 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:05:35.709 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:35.709 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:05:35.709 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:35.709 element at address: 0x20001987c540 with size: 0.250488 MiB
00:05:35.709 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:35.709 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:05:35.709 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_70343
00:05:35.709 element at address: 0x20000085e640 with size: 0.125488 MiB
00:05:35.709 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70343
00:05:35.709 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:05:35.709 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:35.709 element at address: 0x200028265680 with size: 0.023743 MiB
00:05:35.709 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:35.709 element at address: 0x20000085a380 with size: 0.016113 MiB
00:05:35.709 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70343
00:05:35.709 element at address: 0x20002826b7c0 with size: 0.002441 MiB
00:05:35.709 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:35.709 element at address: 0x2000004ffb80 with size: 0.000305 MiB
00:05:35.709 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70343
00:05:35.709 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:05:35.709 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70343
00:05:35.709 element at address: 0x20000085a180 with size: 0.000305 MiB
00:05:35.709 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70343
00:05:35.709 element at address: 0x20002826c280 with size: 0.000305 MiB
00:05:35.709 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:35.709 18:18:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:35.709 18:18:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70343
00:05:35.709 18:18:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 70343 ']'
00:05:35.709 18:18:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 70343
00:05:35.709 18:18:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:35.709 18:18:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:35.709 18:18:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70343
00:05:35.709 18:18:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:35.709 18:18:55 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:35.709 18:18:55 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70343'
killing process with pid 70343
18:18:55 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 70343
18:18:55 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 70343
00:05:35.970 ************************************
00:05:35.970 END TEST dpdk_mem_utility
00:05:35.970 ************************************
00:05:35.970
00:05:35.970 real 0m1.431s
00:05:35.970 user 0m1.480s
00:05:35.970 sys 0m0.364s
00:05:35.970 18:18:55 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:35.970 18:18:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:35.970 18:18:55 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:35.970 18:18:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:35.970 18:18:55 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:35.970 18:18:55 -- common/autotest_common.sh@10 -- # set +x
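Stripped of the harness tracing, the dpdk_mem_utility test that just finished reduces to three commands, all visible above: the env_dpdk_get_mem_stats RPC (whose JSON reply names /tmp/spdk_mem_dump.txt), a plain dpdk_mem_info.py run for the heap/mempool/memzone summary, and dpdk_mem_info.py -m 0 for the per-element detail of heap 0. A sketch of the same sequence against an already-running target, using the paths from this run:

    # Sketch: reproduce the memory-dump steps by hand against a live spdk_tgt.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    "$SPDK/scripts/dpdk_mem_info.py"                 # heaps / mempools / memzones summary
    "$SPDK/scripts/dpdk_mem_info.py" -m 0            # per-element listing for heap id 0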
00:05:35.970 ************************************ 00:05:35.970 START TEST event 00:05:35.970 ************************************ 00:05:35.970 18:18:55 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:35.970 * Looking for test storage... 00:05:35.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:35.970 18:18:55 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.970 18:18:55 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.970 18:18:55 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.970 18:18:55 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.970 18:18:55 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.970 18:18:55 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.970 18:18:55 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.970 18:18:55 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.970 18:18:55 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.970 18:18:55 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.970 18:18:55 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.970 18:18:55 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.970 18:18:55 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.970 18:18:55 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.970 18:18:55 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.970 18:18:55 event -- scripts/common.sh@344 -- # case "$op" in 00:05:35.970 18:18:55 event -- scripts/common.sh@345 -- # : 1 00:05:35.970 18:18:55 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.970 18:18:55 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.970 18:18:55 event -- scripts/common.sh@365 -- # decimal 1 00:05:35.970 18:18:55 event -- scripts/common.sh@353 -- # local d=1 00:05:35.970 18:18:55 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.970 18:18:55 event -- scripts/common.sh@355 -- # echo 1 00:05:35.970 18:18:55 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.970 18:18:55 event -- scripts/common.sh@366 -- # decimal 2 00:05:35.970 18:18:55 event -- scripts/common.sh@353 -- # local d=2 00:05:35.970 18:18:55 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.970 18:18:55 event -- scripts/common.sh@355 -- # echo 2 00:05:35.970 18:18:55 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.970 18:18:55 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.970 18:18:55 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.970 18:18:55 event -- scripts/common.sh@368 -- # return 0 00:05:35.970 18:18:55 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.970 18:18:55 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.970 --rc genhtml_branch_coverage=1 00:05:35.970 --rc genhtml_function_coverage=1 00:05:35.970 --rc genhtml_legend=1 00:05:35.970 --rc geninfo_all_blocks=1 00:05:35.970 --rc geninfo_unexecuted_blocks=1 00:05:35.970 00:05:35.970 ' 00:05:35.970 18:18:55 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.970 --rc genhtml_branch_coverage=1 00:05:35.970 --rc genhtml_function_coverage=1 00:05:35.970 --rc genhtml_legend=1 00:05:35.970 --rc 
geninfo_all_blocks=1 00:05:35.970 --rc geninfo_unexecuted_blocks=1 00:05:35.970 00:05:35.970 ' 00:05:35.970 18:18:55 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.970 --rc genhtml_branch_coverage=1 00:05:35.970 --rc genhtml_function_coverage=1 00:05:35.970 --rc genhtml_legend=1 00:05:35.970 --rc geninfo_all_blocks=1 00:05:35.970 --rc geninfo_unexecuted_blocks=1 00:05:35.970 00:05:35.970 ' 00:05:35.970 18:18:55 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.970 --rc genhtml_branch_coverage=1 00:05:35.970 --rc genhtml_function_coverage=1 00:05:35.970 --rc genhtml_legend=1 00:05:35.970 --rc geninfo_all_blocks=1 00:05:35.970 --rc geninfo_unexecuted_blocks=1 00:05:35.970 00:05:35.970 ' 00:05:35.970 18:18:55 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:35.970 18:18:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:35.970 18:18:55 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:35.970 18:18:55 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:35.970 18:18:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.970 18:18:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.231 ************************************ 00:05:36.231 START TEST event_perf 00:05:36.231 ************************************ 00:05:36.231 18:18:55 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:36.231 Running I/O for 1 seconds...[2024-11-29 18:18:55.905581] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:05:36.232 [2024-11-29 18:18:55.905774] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70423 ] 00:05:36.232 [2024-11-29 18:18:56.061822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.232 [2024-11-29 18:18:56.084634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.232 [2024-11-29 18:18:56.084960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.232 [2024-11-29 18:18:56.085369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.232 [2024-11-29 18:18:56.085386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.618 Running I/O for 1 seconds... 00:05:37.618 lcore 0: 196625 00:05:37.618 lcore 1: 196627 00:05:37.618 lcore 2: 196627 00:05:37.618 lcore 3: 196624 00:05:37.618 done. 
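The four lcore counters above are the raw result of event_perf: with -m 0xF it runs a reactor on each of four cores and counts events for the one-second window given by -t (roughly 196k events per core here). The binary can be run standalone with other values; a sketch, assuming -m and -t keep the core-mask and seconds meaning they have in the trace above (0x3 and 5 are example values, not from this run):

    # Sketch: run the event_perf app directly with a different mask/duration.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/test/event/event_perf/event_perf" -m 0x3 -t 5   # 2 cores for 5 seconds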
00:05:37.618 00:05:37.618 real 0m1.252s 00:05:37.618 user 0m4.062s 00:05:37.618 sys 0m0.073s 00:05:37.618 ************************************ 00:05:37.618 END TEST event_perf 00:05:37.618 ************************************ 00:05:37.618 18:18:57 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.618 18:18:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.618 18:18:57 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:37.618 18:18:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:37.618 18:18:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.618 18:18:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.618 ************************************ 00:05:37.618 START TEST event_reactor 00:05:37.618 ************************************ 00:05:37.618 18:18:57 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:37.618 [2024-11-29 18:18:57.188124] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:05:37.618 [2024-11-29 18:18:57.188323] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70463 ] 00:05:37.618 [2024-11-29 18:18:57.345332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.618 [2024-11-29 18:18:57.364518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.562 test_start 00:05:38.562 oneshot 00:05:38.562 tick 100 00:05:38.562 tick 100 00:05:38.562 tick 250 00:05:38.562 tick 100 00:05:38.562 tick 100 00:05:38.562 tick 100 00:05:38.562 tick 250 00:05:38.562 tick 500 00:05:38.562 tick 100 00:05:38.562 tick 100 00:05:38.562 tick 250 00:05:38.562 tick 100 00:05:38.562 tick 100 00:05:38.562 test_end 00:05:38.562 00:05:38.562 real 0m1.243s 00:05:38.562 user 0m1.074s 00:05:38.562 sys 0m0.062s 00:05:38.562 18:18:58 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.562 18:18:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:38.562 ************************************ 00:05:38.562 END TEST event_reactor 00:05:38.562 ************************************ 00:05:38.562 18:18:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:38.562 18:18:58 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:38.562 18:18:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.562 18:18:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.562 ************************************ 00:05:38.562 START TEST event_reactor_perf 00:05:38.562 ************************************ 00:05:38.562 18:18:58 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:38.822 [2024-11-29 18:18:58.474781] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
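The test_start/oneshot/tick lines above come from the reactor test, which exercises one-shot and periodic events on a single core and prints each firing; reactor_perf, starting here, measures raw event throughput instead. Both binaries can be invoked directly with the same -t duration flag the harness uses above; a sketch:

    # Sketch: run the two reactor tests standalone, as the harness does above.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/test/event/reactor/reactor" -t 1
    "$SPDK/test/event/reactor_perf/reactor_perf" -t 1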
00:05:38.822 [2024-11-29 18:18:58.474899] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70494 ] 00:05:38.822 [2024-11-29 18:18:58.632341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.822 [2024-11-29 18:18:58.650925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.206 test_start 00:05:40.206 test_end 00:05:40.206 Performance: 316947 events per second 00:05:40.206 00:05:40.206 real 0m1.240s 00:05:40.206 user 0m1.079s 00:05:40.206 sys 0m0.055s 00:05:40.206 18:18:59 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.206 18:18:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:40.206 ************************************ 00:05:40.206 END TEST event_reactor_perf 00:05:40.206 ************************************ 00:05:40.206 18:18:59 event -- event/event.sh@49 -- # uname -s 00:05:40.206 18:18:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:40.206 18:18:59 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:40.206 18:18:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.206 18:18:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.206 18:18:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.206 ************************************ 00:05:40.206 START TEST event_scheduler 00:05:40.206 ************************************ 00:05:40.206 18:18:59 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:40.206 * Looking for test storage... 
00:05:40.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:40.206 18:18:59 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:40.207 18:18:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:40.207 18:18:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:40.207 18:18:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.207 18:18:59 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:40.207 18:18:59 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.207 18:18:59 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:40.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.207 --rc genhtml_branch_coverage=1 00:05:40.207 --rc genhtml_function_coverage=1 00:05:40.207 --rc genhtml_legend=1 00:05:40.207 --rc geninfo_all_blocks=1 00:05:40.207 --rc geninfo_unexecuted_blocks=1 00:05:40.207 00:05:40.207 ' 00:05:40.207 18:18:59 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:40.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.207 --rc genhtml_branch_coverage=1 00:05:40.207 --rc genhtml_function_coverage=1 00:05:40.207 --rc genhtml_legend=1 00:05:40.207 --rc geninfo_all_blocks=1 00:05:40.207 --rc geninfo_unexecuted_blocks=1 00:05:40.207 00:05:40.207 ' 00:05:40.207 18:18:59 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:40.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.207 --rc genhtml_branch_coverage=1 00:05:40.207 --rc genhtml_function_coverage=1 00:05:40.207 --rc genhtml_legend=1 00:05:40.207 --rc geninfo_all_blocks=1 00:05:40.207 --rc geninfo_unexecuted_blocks=1 00:05:40.207 00:05:40.207 ' 00:05:40.207 18:18:59 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:40.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.207 --rc genhtml_branch_coverage=1 00:05:40.207 --rc genhtml_function_coverage=1 00:05:40.207 --rc genhtml_legend=1 00:05:40.207 --rc geninfo_all_blocks=1 00:05:40.207 --rc geninfo_unexecuted_blocks=1 00:05:40.207 00:05:40.207 ' 00:05:40.207 18:18:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:40.207 18:18:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70564 00:05:40.207 18:18:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.207 18:18:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70564 00:05:40.207 18:18:59 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70564 ']' 00:05:40.207 18:18:59 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:40.207 18:18:59 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.207 18:18:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:40.207 18:18:59 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.207 18:18:59 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.207 18:18:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.207 [2024-11-29 18:18:59.932067] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:05:40.207 [2024-11-29 18:18:59.932363] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70564 ] 00:05:40.207 [2024-11-29 18:19:00.089765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:40.468 [2024-11-29 18:19:00.112349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.468 [2024-11-29 18:19:00.112691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.468 [2024-11-29 18:19:00.113067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.468 [2024-11-29 18:19:00.113117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.041 18:19:00 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.041 18:19:00 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:41.041 18:19:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:41.041 18:19:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.041 18:19:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.041 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:41.041 POWER: Cannot set governor of lcore 0 to userspace 00:05:41.041 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:41.041 POWER: Cannot set governor of lcore 0 to performance 00:05:41.041 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:41.041 POWER: Cannot set governor of lcore 0 to userspace 00:05:41.041 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:41.041 POWER: Cannot set governor of lcore 0 to userspace 00:05:41.041 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:41.041 POWER: Unable to set Power Management Environment for lcore 0 00:05:41.041 [2024-11-29 18:19:00.791147] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:41.041 [2024-11-29 18:19:00.791180] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:41.041 [2024-11-29 18:19:00.791229] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:41.041 [2024-11-29 18:19:00.791429] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:41.041 [2024-11-29 
18:19:00.791465] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:41.041 [2024-11-29 18:19:00.791513] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:41.041 18:19:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.041 18:19:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:41.041 18:19:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.041 18:19:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.041 [2024-11-29 18:19:00.845444] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:41.041 18:19:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.041 18:19:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:41.041 18:19:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.041 18:19:00 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.041 18:19:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.041 ************************************ 00:05:41.041 START TEST scheduler_create_thread 00:05:41.041 ************************************ 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.041 2 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.041 3 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.041 4 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.041 18:19:00 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.041 5 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.041 6 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.041 7 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.041 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.041 8 00:05:41.042 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.042 18:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:41.042 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.042 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.042 9 00:05:41.042 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.042 18:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:41.042 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.042 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.042 10 00:05:41.042 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.042 18:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:41.042 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.042 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.042 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.042 18:19:00 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:41.042 18:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:41.042 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.042 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.303 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.303 18:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:41.303 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.303 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.303 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.303 18:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:41.303 18:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:41.303 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.303 18:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.564 ************************************ 00:05:41.564 END TEST scheduler_create_thread 00:05:41.564 ************************************ 00:05:41.564 18:19:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.564 00:05:41.564 real 0m0.588s 00:05:41.564 user 0m0.014s 00:05:41.564 sys 0m0.002s 00:05:41.564 18:19:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.564 18:19:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.826 18:19:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:41.826 18:19:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70564 00:05:41.826 18:19:01 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70564 ']' 00:05:41.826 18:19:01 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70564 00:05:41.826 18:19:01 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:41.826 18:19:01 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.826 18:19:01 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70564 00:05:41.826 killing process with pid 70564 00:05:41.826 18:19:01 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:41.826 18:19:01 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:41.826 18:19:01 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70564' 00:05:41.826 18:19:01 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70564 00:05:41.826 18:19:01 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70564 00:05:42.086 [2024-11-29 18:19:01.923305] 
scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:42.348 00:05:42.348 real 0m2.323s 00:05:42.348 user 0m4.585s 00:05:42.348 sys 0m0.318s 00:05:42.348 18:19:02 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.348 ************************************ 00:05:42.348 END TEST event_scheduler 00:05:42.348 ************************************ 00:05:42.348 18:19:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.348 18:19:02 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:42.348 18:19:02 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:42.348 18:19:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.348 18:19:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.348 18:19:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.348 ************************************ 00:05:42.348 START TEST app_repeat 00:05:42.348 ************************************ 00:05:42.348 18:19:02 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:42.348 18:19:02 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.348 18:19:02 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.348 18:19:02 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:42.348 18:19:02 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.348 18:19:02 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:42.348 18:19:02 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:42.348 18:19:02 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:42.348 Process app_repeat pid: 70637 00:05:42.348 spdk_app_start Round 0 00:05:42.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:42.348 18:19:02 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70637 00:05:42.348 18:19:02 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.348 18:19:02 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70637' 00:05:42.348 18:19:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:42.348 18:19:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:42.348 18:19:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70637 /var/tmp/spdk-nbd.sock 00:05:42.348 18:19:02 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:42.348 18:19:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70637 ']' 00:05:42.348 18:19:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.348 18:19:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.348 18:19:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:42.348 18:19:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.348 18:19:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.348 [2024-11-29 18:19:02.129924] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
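The event_scheduler run above drives the dynamic scheduler entirely over JSON-RPC: the test app is started with --wait-for-rpc, the scheduler is switched while the app is paused, and framework_start_init then releases initialization. The POWER/GUEST_CHANNEL errors are the DPDK governor failing to find cpufreq sysfs knobs inside the VM; per the notices, the dynamic scheduler proceeds without the governor. A minimal sketch of that RPC sequence, assuming it runs from the SPDK repo root with the paths and flags shown in the trace:

    # start the test app paused so the scheduler can be chosen before init
    test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    # switch to the dynamic scheduler; the set_opts notices above report the
    # resulting options: load limit 20, core limit 80, core busy 95
    scripts/rpc.py framework_set_scheduler dynamic
    # release the app from --wait-for-rpc so subsystem init completes
    scripts/rpc.py framework_start_init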
00:05:42.348 [2024-11-29 18:19:02.130044] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70637 ] 00:05:42.608 [2024-11-29 18:19:02.287059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.608 [2024-11-29 18:19:02.306940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.608 [2024-11-29 18:19:02.307010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.245 18:19:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.246 18:19:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:43.246 18:19:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.514 Malloc0 00:05:43.514 18:19:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.776 Malloc1 00:05:43.776 18:19:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.776 18:19:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.777 18:19:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.777 18:19:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.777 18:19:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.777 18:19:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.777 18:19:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.777 18:19:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.777 18:19:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.777 18:19:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.777 18:19:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.777 18:19:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.777 18:19:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.777 18:19:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.777 18:19:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.777 18:19:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.038 /dev/nbd0 00:05:44.038 18:19:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.038 18:19:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.038 18:19:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:44.038 18:19:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:44.038 18:19:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:44.038 18:19:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:44.038 18:19:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:44.038 18:19:03 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:44.038 18:19:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:44.038 18:19:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:44.038 18:19:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.038 1+0 records in 00:05:44.038 1+0 records out 00:05:44.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000725743 s, 5.6 MB/s 00:05:44.038 18:19:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.038 18:19:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:44.038 18:19:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.038 18:19:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:44.038 18:19:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:44.038 18:19:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.038 18:19:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.038 18:19:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.301 /dev/nbd1 00:05:44.301 18:19:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.301 18:19:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.301 18:19:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:44.301 18:19:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:44.301 18:19:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:44.301 18:19:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:44.301 18:19:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:44.301 18:19:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:44.301 18:19:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:44.301 18:19:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:44.301 18:19:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.301 1+0 records in 00:05:44.301 1+0 records out 00:05:44.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367517 s, 11.1 MB/s 00:05:44.301 18:19:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.301 18:19:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:44.301 18:19:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.301 18:19:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:44.301 18:19:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:44.301 18:19:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.301 18:19:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.301 18:19:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.301 18:19:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.301 
18:19:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.563 { 00:05:44.563 "nbd_device": "/dev/nbd0", 00:05:44.563 "bdev_name": "Malloc0" 00:05:44.563 }, 00:05:44.563 { 00:05:44.563 "nbd_device": "/dev/nbd1", 00:05:44.563 "bdev_name": "Malloc1" 00:05:44.563 } 00:05:44.563 ]' 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.563 { 00:05:44.563 "nbd_device": "/dev/nbd0", 00:05:44.563 "bdev_name": "Malloc0" 00:05:44.563 }, 00:05:44.563 { 00:05:44.563 "nbd_device": "/dev/nbd1", 00:05:44.563 "bdev_name": "Malloc1" 00:05:44.563 } 00:05:44.563 ]' 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.563 /dev/nbd1' 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.563 /dev/nbd1' 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.563 256+0 records in 00:05:44.563 256+0 records out 00:05:44.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00918115 s, 114 MB/s 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.563 256+0 records in 00:05:44.563 256+0 records out 00:05:44.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0366895 s, 28.6 MB/s 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.563 18:19:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.563 256+0 records in 00:05:44.564 256+0 records out 00:05:44.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213797 s, 49.0 MB/s 00:05:44.564 18:19:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:44.564 18:19:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.564 18:19:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.564 18:19:04 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.564 18:19:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.564 18:19:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.564 18:19:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.564 18:19:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.564 18:19:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.564 18:19:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.564 18:19:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.564 18:19:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.564 18:19:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.564 18:19:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.564 18:19:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.564 18:19:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.564 18:19:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:44.564 18:19:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.564 18:19:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.825 18:19:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.825 18:19:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.825 18:19:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.825 18:19:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.825 18:19:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.825 18:19:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.825 18:19:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.825 18:19:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.825 18:19:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.825 18:19:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.086 18:19:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.086 18:19:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.086 18:19:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.086 18:19:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.086 18:19:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.086 18:19:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.086 18:19:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.086 18:19:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.086 18:19:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.086 18:19:04 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.086 18:19:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.347 18:19:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.347 18:19:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.347 18:19:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.347 18:19:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.347 18:19:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.347 18:19:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.347 18:19:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:45.347 18:19:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.347 18:19:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.347 18:19:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.347 18:19:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.347 18:19:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.347 18:19:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:45.607 18:19:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.607 [2024-11-29 18:19:05.339234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.607 [2024-11-29 18:19:05.359155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.607 [2024-11-29 18:19:05.359262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.607 [2024-11-29 18:19:05.392185] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.607 [2024-11-29 18:19:05.392236] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:48.908 spdk_app_start Round 1 00:05:48.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.908 18:19:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.908 18:19:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:48.908 18:19:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70637 /var/tmp/spdk-nbd.sock 00:05:48.908 18:19:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70637 ']' 00:05:48.908 18:19:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.908 18:19:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.908 18:19:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
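Each app_repeat round above verifies both NBD devices with the same dd/cmp pattern. A condensed sketch of that verification, using the commands and paths exactly as they appear in the trace (the 1M compare length mirrors the cmp -b -n 1M calls):

    randfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    # write a 1 MiB random pattern file, then copy it onto each exported device
    dd if=/dev/urandom of=$randfile bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$randfile of=$nbd bs=4096 count=256 oflag=direct
    done
    # read back through the NBD devices and byte-compare against the pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $randfile $nbd
    done
    rm $randfile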
00:05:48.908 18:19:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.908 18:19:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.908 18:19:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.908 18:19:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:48.908 18:19:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.908 Malloc0 00:05:48.908 18:19:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.169 Malloc1 00:05:49.169 18:19:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.169 18:19:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.169 18:19:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.169 18:19:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.169 18:19:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.169 18:19:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.169 18:19:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.169 18:19:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.169 18:19:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.169 18:19:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.169 18:19:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.169 18:19:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.169 18:19:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.169 18:19:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.169 18:19:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.169 18:19:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.430 /dev/nbd0 00:05:49.431 18:19:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.431 18:19:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.431 18:19:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:49.431 18:19:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:49.431 18:19:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:49.431 18:19:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:49.431 18:19:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:49.431 18:19:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:49.431 18:19:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:49.431 18:19:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:49.431 18:19:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.431 1+0 records in 00:05:49.431 1+0 records out 
00:05:49.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236955 s, 17.3 MB/s 00:05:49.431 18:19:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.431 18:19:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:49.431 18:19:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.431 18:19:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.431 18:19:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:49.431 18:19:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.431 18:19:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.431 18:19:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.431 /dev/nbd1 00:05:49.691 18:19:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.691 18:19:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.691 18:19:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:49.691 18:19:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:49.691 18:19:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:49.692 18:19:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:49.692 18:19:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:49.692 18:19:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:49.692 18:19:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:49.692 18:19:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:49.692 18:19:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.692 1+0 records in 00:05:49.692 1+0 records out 00:05:49.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269358 s, 15.2 MB/s 00:05:49.692 18:19:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.692 18:19:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:49.692 18:19:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.692 18:19:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.692 18:19:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:49.692 18:19:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.692 18:19:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.692 18:19:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.692 18:19:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.692 18:19:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.692 18:19:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.692 { 00:05:49.692 "nbd_device": "/dev/nbd0", 00:05:49.692 "bdev_name": "Malloc0" 00:05:49.692 }, 00:05:49.692 { 00:05:49.692 "nbd_device": "/dev/nbd1", 00:05:49.692 "bdev_name": "Malloc1" 00:05:49.692 } 
00:05:49.692 ]' 00:05:49.692 18:19:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.692 18:19:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.692 { 00:05:49.692 "nbd_device": "/dev/nbd0", 00:05:49.692 "bdev_name": "Malloc0" 00:05:49.692 }, 00:05:49.692 { 00:05:49.692 "nbd_device": "/dev/nbd1", 00:05:49.692 "bdev_name": "Malloc1" 00:05:49.692 } 00:05:49.692 ]' 00:05:49.692 18:19:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.692 /dev/nbd1' 00:05:49.692 18:19:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.692 18:19:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.692 /dev/nbd1' 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.953 256+0 records in 00:05:49.953 256+0 records out 00:05:49.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00695758 s, 151 MB/s 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.953 256+0 records in 00:05:49.953 256+0 records out 00:05:49.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169145 s, 62.0 MB/s 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.953 256+0 records in 00:05:49.953 256+0 records out 00:05:49.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193521 s, 54.2 MB/s 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.953 18:19:09 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.953 18:19:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.214 18:19:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.214 18:19:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.214 18:19:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.214 18:19:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.214 18:19:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.214 18:19:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.214 18:19:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.214 18:19:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.214 18:19:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.214 18:19:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.214 18:19:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.214 18:19:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.214 18:19:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.214 18:19:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.214 18:19:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.214 18:19:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.214 18:19:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.214 18:19:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.214 18:19:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.214 18:19:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.214 18:19:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.476 18:19:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.476 18:19:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.476 18:19:10 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:50.476 18:19:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.476 18:19:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.476 18:19:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.476 18:19:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.476 18:19:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.476 18:19:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.476 18:19:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.476 18:19:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.476 18:19:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.476 18:19:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.735 18:19:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:50.735 [2024-11-29 18:19:10.609959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.735 [2024-11-29 18:19:10.626088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.735 [2024-11-29 18:19:10.626155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.999 [2024-11-29 18:19:10.655118] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.999 [2024-11-29 18:19:10.655164] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.306 18:19:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.306 spdk_app_start Round 2 00:05:54.306 18:19:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:54.306 18:19:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70637 /var/tmp/spdk-nbd.sock 00:05:54.306 18:19:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70637 ']' 00:05:54.306 18:19:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.306 18:19:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.306 18:19:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
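The Malloc0/Malloc1 setup repeated at the top of every round is plain JSON-RPC against the app's UNIX socket. A sketch of those calls as logged in the trace (rpc.py path and socket are the ones shown; the malloc bdev names are auto-assigned by SPDK):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # two 64 MB malloc bdevs with a 4096-byte block size
    $rpc -s $sock bdev_malloc_create 64 4096    # returns Malloc0
    $rpc -s $sock bdev_malloc_create 64 4096    # returns Malloc1
    # export them as kernel NBD devices and confirm with nbd_get_disks
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
    $rpc -s $sock nbd_get_disks
    # teardown mirrors setup; nbd_get_disks returning [] means everything stopped
    $rpc -s $sock nbd_stop_disk /dev/nbd0
    $rpc -s $sock nbd_stop_disk /dev/nbd1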
00:05:54.306 18:19:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.306 18:19:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.306 18:19:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.306 18:19:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:54.306 18:19:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.306 Malloc0 00:05:54.306 18:19:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.306 Malloc1 00:05:54.306 18:19:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.306 18:19:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.306 18:19:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.306 18:19:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.306 18:19:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.306 18:19:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.306 18:19:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.306 18:19:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.306 18:19:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.307 18:19:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.307 18:19:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.307 18:19:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.307 18:19:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.307 18:19:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.307 18:19:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.307 18:19:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.564 /dev/nbd0 00:05:54.564 18:19:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.564 18:19:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.564 18:19:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:54.564 18:19:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.564 18:19:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.564 18:19:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.564 18:19:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:54.564 18:19:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.564 18:19:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.564 18:19:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.564 18:19:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.564 1+0 records in 00:05:54.564 1+0 records out 
00:05:54.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221358 s, 18.5 MB/s 00:05:54.564 18:19:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.564 18:19:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.564 18:19:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.564 18:19:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.565 18:19:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.565 18:19:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.565 18:19:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.565 18:19:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.822 /dev/nbd1 00:05:54.822 18:19:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.822 18:19:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.822 18:19:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:54.822 18:19:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.822 18:19:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.822 18:19:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.822 18:19:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:54.822 18:19:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.822 18:19:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.822 18:19:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.822 18:19:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.822 1+0 records in 00:05:54.822 1+0 records out 00:05:54.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197465 s, 20.7 MB/s 00:05:54.823 18:19:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.823 18:19:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.823 18:19:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.823 18:19:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.823 18:19:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.823 18:19:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.823 18:19:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.823 18:19:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.823 18:19:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.823 18:19:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.080 18:19:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.080 { 00:05:55.080 "nbd_device": "/dev/nbd0", 00:05:55.080 "bdev_name": "Malloc0" 00:05:55.080 }, 00:05:55.080 { 00:05:55.080 "nbd_device": "/dev/nbd1", 00:05:55.080 "bdev_name": "Malloc1" 00:05:55.080 } 
00:05:55.080 ]' 00:05:55.080 18:19:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.080 { 00:05:55.080 "nbd_device": "/dev/nbd0", 00:05:55.080 "bdev_name": "Malloc0" 00:05:55.080 }, 00:05:55.080 { 00:05:55.080 "nbd_device": "/dev/nbd1", 00:05:55.080 "bdev_name": "Malloc1" 00:05:55.080 } 00:05:55.080 ]' 00:05:55.080 18:19:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.080 18:19:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.080 /dev/nbd1' 00:05:55.080 18:19:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.080 /dev/nbd1' 00:05:55.080 18:19:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.080 18:19:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.081 256+0 records in 00:05:55.081 256+0 records out 00:05:55.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00881944 s, 119 MB/s 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.081 256+0 records in 00:05:55.081 256+0 records out 00:05:55.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150292 s, 69.8 MB/s 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.081 256+0 records in 00:05:55.081 256+0 records out 00:05:55.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172345 s, 60.8 MB/s 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.081 18:19:14 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.081 18:19:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.349 18:19:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.349 18:19:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.349 18:19:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.349 18:19:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.349 18:19:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.349 18:19:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.349 18:19:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.349 18:19:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.349 18:19:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.349 18:19:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.607 18:19:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.607 18:19:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.607 18:19:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.607 18:19:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.607 18:19:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.607 18:19:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.607 18:19:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.607 18:19:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.607 18:19:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.607 18:19:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.607 18:19:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.866 18:19:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.866 18:19:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.866 18:19:15 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.866 18:19:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.866 18:19:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.866 18:19:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.866 18:19:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.866 18:19:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.866 18:19:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.866 18:19:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.866 18:19:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.866 18:19:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.866 18:19:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.125 18:19:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.125 [2024-11-29 18:19:15.897500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.125 [2024-11-29 18:19:15.912375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.125 [2024-11-29 18:19:15.912449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.125 [2024-11-29 18:19:15.941154] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.125 [2024-11-29 18:19:15.941197] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.408 18:19:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70637 /var/tmp/spdk-nbd.sock 00:05:59.408 18:19:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70637 ']' 00:05:59.408 18:19:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.408 18:19:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.408 18:19:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
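The write/verify pass traced above reduces to a small, reusable pattern: fill a scratch file from /dev/urandom, copy it onto every exported NBD device with O_DIRECT so the data actually reaches the block layer, then read each device back and byte-compare it against the scratch file. A condensed bash sketch of that flow (the function name and scratch path are illustrative; block size, count, and the cmp flags mirror the trace):

  # Write 1 MiB of random data to each NBD device, then compare it back.
  # Condensed from the nbd_dd_data_verify helper exercised above.
  verify_nbd_data() {
      local tmp_file=/tmp/nbdrandtest
      local nbd_list=("$@")
      dd if=/dev/urandom of="$tmp_file" bs=4096 count=256      # 256 x 4 KiB = 1 MiB
      for dev in "${nbd_list[@]}"; do
          # oflag=direct bypasses the page cache so the write hits the device
          dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
      done
      for dev in "${nbd_list[@]}"; do
          cmp -b -n 1M "$tmp_file" "$dev"                      # report first differing byte, if any
      done
      rm "$tmp_file"
  }
  verify_nbd_data /dev/nbd0 /dev/nbd1

Teardown then stops each device over RPC and, as seen in waitfornbd_exit above, polls /proc/partitions (up to 20 tries) until the nbdX entry disappears.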
00:05:59.408 18:19:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.408 18:19:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.408 18:19:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.408 18:19:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:59.408 18:19:19 event.app_repeat -- event/event.sh@39 -- # killprocess 70637 00:05:59.408 18:19:19 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70637 ']' 00:05:59.408 18:19:19 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70637 00:05:59.408 18:19:19 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:59.408 18:19:19 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.408 18:19:19 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70637 00:05:59.408 18:19:19 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.408 18:19:19 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.408 killing process with pid 70637 00:05:59.408 18:19:19 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70637' 00:05:59.408 18:19:19 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70637 00:05:59.408 18:19:19 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70637 00:05:59.408 spdk_app_start is called in Round 0. 00:05:59.408 Shutdown signal received, stop current app iteration 00:05:59.408 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 reinitialization... 00:05:59.408 spdk_app_start is called in Round 1. 00:05:59.408 Shutdown signal received, stop current app iteration 00:05:59.408 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 reinitialization... 00:05:59.408 spdk_app_start is called in Round 2. 00:05:59.408 Shutdown signal received, stop current app iteration 00:05:59.408 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 reinitialization... 00:05:59.408 spdk_app_start is called in Round 3. 00:05:59.408 Shutdown signal received, stop current app iteration 00:05:59.408 18:19:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:59.408 18:19:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:59.408 00:05:59.408 real 0m17.077s 00:05:59.408 user 0m38.250s 00:05:59.408 sys 0m2.142s 00:05:59.408 18:19:19 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.408 18:19:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.408 ************************************ 00:05:59.408 END TEST app_repeat 00:05:59.408 ************************************ 00:05:59.408 18:19:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:59.408 18:19:19 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:59.408 18:19:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.408 18:19:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.408 18:19:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.408 ************************************ 00:05:59.408 START TEST cpu_locks 00:05:59.408 ************************************ 00:05:59.408 18:19:19 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:59.408 * Looking for test storage... 
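killprocess, used just above to bring the app_repeat target down, is deliberately defensive: it first confirms the PID is still alive, refuses to signal anything that is not an SPDK reactor, and finally waits so the exit status is reaped. A trimmed sketch of that logic (simplified from the autotest_common.sh helper visible in the trace):

  # Signal an SPDK target and reap it; trimmed from autotest_common.sh.
  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                     # still running?
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          # the real helper special-cases process_name = sudo; elided here
          [ "$process_name" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                    # reap and propagate the exit status
  }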
00:05:59.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:59.408 18:19:19 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:59.408 18:19:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:59.408 18:19:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:59.667 18:19:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.667 18:19:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:59.667 18:19:19 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.667 18:19:19 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:59.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.667 --rc genhtml_branch_coverage=1 00:05:59.667 --rc genhtml_function_coverage=1 00:05:59.667 --rc genhtml_legend=1 00:05:59.667 --rc geninfo_all_blocks=1 00:05:59.667 --rc geninfo_unexecuted_blocks=1 00:05:59.667 00:05:59.667 ' 00:05:59.667 18:19:19 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:59.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.667 --rc genhtml_branch_coverage=1 00:05:59.667 --rc genhtml_function_coverage=1 
00:05:59.667 --rc genhtml_legend=1 00:05:59.667 --rc geninfo_all_blocks=1 00:05:59.667 --rc geninfo_unexecuted_blocks=1 00:05:59.667 00:05:59.667 ' 00:05:59.667 18:19:19 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:59.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.667 --rc genhtml_branch_coverage=1 00:05:59.667 --rc genhtml_function_coverage=1 00:05:59.667 --rc genhtml_legend=1 00:05:59.667 --rc geninfo_all_blocks=1 00:05:59.667 --rc geninfo_unexecuted_blocks=1 00:05:59.667 00:05:59.667 ' 00:05:59.667 18:19:19 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:59.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.667 --rc genhtml_branch_coverage=1 00:05:59.667 --rc genhtml_function_coverage=1 00:05:59.667 --rc genhtml_legend=1 00:05:59.667 --rc geninfo_all_blocks=1 00:05:59.667 --rc geninfo_unexecuted_blocks=1 00:05:59.667 00:05:59.667 ' 00:05:59.667 18:19:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:59.667 18:19:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:59.667 18:19:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:59.667 18:19:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:59.667 18:19:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.667 18:19:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.667 18:19:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.667 ************************************ 00:05:59.667 START TEST default_locks 00:05:59.667 ************************************ 00:05:59.667 18:19:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:59.667 18:19:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71062 00:05:59.667 18:19:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71062 00:05:59.668 18:19:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71062 ']' 00:05:59.668 18:19:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.668 18:19:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.668 18:19:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.668 18:19:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.668 18:19:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.668 18:19:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.668 [2024-11-29 18:19:19.462409] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
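While the first lock-test target initializes, it is worth unpacking the coverage gate that ran just before it: scripts/common.sh decides whether the installed lcov predates 2.x by splitting both version strings on dots, dashes, and colons, then comparing the fields numerically. A condensed equivalent of the lt/cmp_versions pair (numeric fields assumed; the full helper also normalizes non-numeric components):

  # lt A B: succeed when version A is strictly older than version B.
  # Condensed from scripts/common.sh cmp_versions; '<' case only.
  lt() {
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}      # missing fields compare as 0
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1                                       # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov predates 2.x: enable branch/function coverage flags"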
00:05:59.668 [2024-11-29 18:19:19.462532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71062 ] 00:05:59.925 [2024-11-29 18:19:19.619107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.925 [2024-11-29 18:19:19.646709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.489 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.489 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:00.489 18:19:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71062 00:06:00.489 18:19:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71062 00:06:00.490 18:19:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.747 18:19:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71062 00:06:00.747 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 71062 ']' 00:06:00.747 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 71062 00:06:00.747 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:00.747 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.747 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71062 00:06:00.747 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.747 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.747 killing process with pid 71062 00:06:00.747 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71062' 00:06:00.747 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 71062 00:06:00.747 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 71062 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71062 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71062 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 71062 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71062 ']' 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.006 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.006 ERROR: process (pid: 71062) is no longer running 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.006 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71062) - No such process 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.006 00:06:01.006 real 0m1.481s 00:06:01.006 user 0m1.485s 00:06:01.006 sys 0m0.466s 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.006 ************************************ 00:06:01.006 END TEST default_locks 00:06:01.006 ************************************ 00:06:01.006 18:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.264 18:19:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:01.264 18:19:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.264 18:19:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.264 18:19:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.264 ************************************ 00:06:01.264 START TEST default_locks_via_rpc 00:06:01.264 ************************************ 00:06:01.264 18:19:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:01.264 18:19:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71110 00:06:01.264 18:19:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.264 18:19:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71110 00:06:01.264 18:19:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71110 ']' 00:06:01.264 18:19:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
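Two small assertions carry the default_locks test that just finished: a target that claims core 0 holds a POSIX file lock on /var/tmp/spdk_cpu_lock_000, which lslocks can observe from outside the process, and once the target exits cleanly no claimed lock may survive. Sketches of both checks (shapes follow the cpu_locks.sh helpers in the trace; the glob handling here assumes nullglob is unset):

  # locks_exist: the target with this PID must hold a CPU-core file lock.
  locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  # no_locks: after shutdown, no spdk_cpu_lock_* path may remain.
  no_locks() {
      local lock_files=(/var/tmp/spdk_cpu_lock_*)
      # with nullglob unset an unmatched glob stays literal, so test for a real file
      [ -e "${lock_files[0]}" ] && return 1
      return 0
  }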
00:06:01.264 18:19:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.264 18:19:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.264 18:19:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.264 18:19:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.264 [2024-11-29 18:19:21.015531] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:01.264 [2024-11-29 18:19:21.015650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71110 ] 00:06:01.522 [2024-11-29 18:19:21.171688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.522 [2024-11-29 18:19:21.194338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.087 18:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.087 18:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:02.087 18:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:02.087 18:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.087 18:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.087 18:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.087 18:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:02.087 18:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:02.087 18:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:02.087 18:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:02.087 18:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:02.087 18:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.087 18:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.087 18:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.087 18:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71110 00:06:02.087 18:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71110 00:06:02.087 18:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.345 18:19:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71110 00:06:02.345 18:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 71110 ']' 00:06:02.345 18:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 71110 00:06:02.345 18:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:02.345 18:19:22 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.345 18:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71110 00:06:02.345 18:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.345 18:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.345 killing process with pid 71110 00:06:02.345 18:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71110' 00:06:02.345 18:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 71110 00:06:02.345 18:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 71110 00:06:02.603 00:06:02.603 real 0m1.393s 00:06:02.603 user 0m1.385s 00:06:02.603 sys 0m0.452s 00:06:02.603 18:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.603 ************************************ 00:06:02.603 END TEST default_locks_via_rpc 00:06:02.603 ************************************ 00:06:02.604 18:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.604 18:19:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:02.604 18:19:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.604 18:19:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.604 18:19:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.604 ************************************ 00:06:02.604 START TEST non_locking_app_on_locked_coremask 00:06:02.604 ************************************ 00:06:02.604 18:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:02.604 18:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71156 00:06:02.604 18:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71156 /var/tmp/spdk.sock 00:06:02.604 18:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71156 ']' 00:06:02.604 18:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.604 18:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.604 18:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.604 18:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.604 18:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.604 18:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.604 [2024-11-29 18:19:22.473967] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:06:02.604 [2024-11-29 18:19:22.474083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71156 ] 00:06:02.861 [2024-11-29 18:19:22.629555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.861 [2024-11-29 18:19:22.649682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.426 18:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.426 18:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:03.426 18:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:03.426 18:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71167 00:06:03.426 18:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71167 /var/tmp/spdk2.sock 00:06:03.426 18:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71167 ']' 00:06:03.426 18:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.426 18:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.426 18:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.426 18:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.426 18:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.685 [2024-11-29 18:19:23.371766] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:03.685 [2024-11-29 18:19:23.371886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71167 ] 00:06:03.685 [2024-11-29 18:19:23.543329] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
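Two opt-out mechanisms for core locking appear back to back here: the second target above skipped its locks at launch with --disable-cpumask-locks, while the via_rpc test before it toggled the same state on a live target. The RPC round-trip looks like this (rpc.py location and socket path as in the trace; the tgt_pid variable is illustrative):

  # Toggle CPU-core file locks on a running target over its RPC socket.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk.sock framework_disable_cpumask_locks     # lock files released
  ! lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock                # nothing held now
  $rpc -s /var/tmp/spdk.sock framework_enable_cpumask_locks      # locks re-acquired
  lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock                  # lock is back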
00:06:03.685 [2024-11-29 18:19:23.543381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.685 [2024-11-29 18:19:23.582053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.621 18:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.621 18:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:04.621 18:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71156 00:06:04.621 18:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71156 00:06:04.621 18:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.621 18:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71156 00:06:04.622 18:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71156 ']' 00:06:04.622 18:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71156 00:06:04.622 18:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:04.622 18:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.622 18:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71156 00:06:04.880 18:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.880 killing process with pid 71156 00:06:04.880 18:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.880 18:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71156' 00:06:04.880 18:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71156 00:06:04.880 18:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71156 00:06:05.456 18:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71167 00:06:05.456 18:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71167 ']' 00:06:05.456 18:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71167 00:06:05.456 18:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.456 18:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.456 18:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71167 00:06:05.456 18:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.456 killing process with pid 71167 00:06:05.456 18:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.456 18:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71167' 00:06:05.456 18:19:25 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71167 00:06:05.456 18:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71167 00:06:05.723 00:06:05.723 real 0m3.198s 00:06:05.723 user 0m3.490s 00:06:05.723 sys 0m0.788s 00:06:05.723 18:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.723 18:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.723 ************************************ 00:06:05.723 END TEST non_locking_app_on_locked_coremask 00:06:05.723 ************************************ 00:06:05.985 18:19:25 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:05.985 18:19:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.985 18:19:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.985 18:19:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.985 ************************************ 00:06:05.985 START TEST locking_app_on_unlocked_coremask 00:06:05.985 ************************************ 00:06:05.985 18:19:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:05.985 18:19:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71230 00:06:05.985 18:19:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71230 /var/tmp/spdk.sock 00:06:05.985 18:19:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:05.985 18:19:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71230 ']' 00:06:05.985 18:19:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.985 18:19:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.985 18:19:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.985 18:19:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.985 18:19:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.985 [2024-11-29 18:19:25.720922] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:05.985 [2024-11-29 18:19:25.721033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71230 ] 00:06:05.985 [2024-11-29 18:19:25.876526] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
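This test inverts the previous one: the first target runs on core 0 but declines its lock, so a second target sharing the same core can claim it. The shape of the setup (binary and socket paths from the trace; backgrounding and waits are simplified):

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  # First instance: core 0, but no lock file is taken.
  $tgt -m 0x1 --disable-cpumask-locks &        # RPC on the default /var/tmp/spdk.sock
  # Second instance: same core, separate RPC socket; it acquires the lock
  # because the core is still unclaimed.
  $tgt -m 0x1 -r /var/tmp/spdk2.sock &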
00:06:05.985 [2024-11-29 18:19:25.876582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.246 [2024-11-29 18:19:25.895916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.818 18:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.818 18:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:06.818 18:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71246 00:06:06.818 18:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:06.818 18:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71246 /var/tmp/spdk2.sock 00:06:06.818 18:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71246 ']' 00:06:06.818 18:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.818 18:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.818 18:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.818 18:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.818 18:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.818 [2024-11-29 18:19:26.617706] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
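Between each launch and the first RPC sits waitforlisten, which gives a target up to 100 retries to come up before the test fails. A condensed sketch (the real autotest_common.sh helper also probes the socket with an actual RPC before declaring victory; this version only checks liveness and socket existence):

  # Block until $pid is listening on its RPC socket, or fail.
  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for (( i = 0; i < max_retries; i++ )); do
          kill -0 "$pid" 2> /dev/null || return 1    # target died while starting
          [ -S "$rpc_addr" ] && return 0             # socket is up
          sleep 0.5
      done
      return 1
  }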
00:06:06.818 [2024-11-29 18:19:26.617996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71246 ] 00:06:07.080 [2024-11-29 18:19:26.790646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.080 [2024-11-29 18:19:26.829555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.652 18:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.652 18:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:07.652 18:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71246 00:06:07.652 18:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71246 00:06:07.652 18:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.913 18:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71230 00:06:07.913 18:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71230 ']' 00:06:07.913 18:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71230 00:06:07.913 18:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:07.913 18:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.913 18:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71230 00:06:07.913 killing process with pid 71230 00:06:07.913 18:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.913 18:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.913 18:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71230' 00:06:07.913 18:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71230 00:06:07.913 18:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71230 00:06:08.485 18:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71246 00:06:08.485 18:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71246 ']' 00:06:08.486 18:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71246 00:06:08.486 18:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:08.486 18:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.486 18:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71246 00:06:08.486 killing process with pid 71246 00:06:08.486 18:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.486 18:19:28 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.486 18:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71246' 00:06:08.486 18:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71246 00:06:08.486 18:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71246 00:06:08.748 ************************************ 00:06:08.748 END TEST locking_app_on_unlocked_coremask 00:06:08.748 ************************************ 00:06:08.748 00:06:08.748 real 0m2.923s 00:06:08.748 user 0m3.242s 00:06:08.748 sys 0m0.767s 00:06:08.748 18:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.748 18:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.748 18:19:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:08.748 18:19:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.748 18:19:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.748 18:19:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.748 ************************************ 00:06:08.748 START TEST locking_app_on_locked_coremask 00:06:08.748 ************************************ 00:06:08.748 18:19:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:08.748 18:19:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71304 00:06:08.748 18:19:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71304 /var/tmp/spdk.sock 00:06:08.748 18:19:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71304 ']' 00:06:08.748 18:19:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.748 18:19:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.748 18:19:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.748 18:19:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.748 18:19:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.748 18:19:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.009 [2024-11-29 18:19:28.689190] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:06:09.009 [2024-11-29 18:19:28.689321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71304 ] 00:06:09.009 [2024-11-29 18:19:28.851507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.009 [2024-11-29 18:19:28.870307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.943 18:19:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.943 18:19:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:09.943 18:19:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:09.943 18:19:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71315 00:06:09.943 18:19:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71315 /var/tmp/spdk2.sock 00:06:09.943 18:19:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:09.943 18:19:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71315 /var/tmp/spdk2.sock 00:06:09.944 18:19:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:09.944 18:19:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.944 18:19:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:09.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.944 18:19:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.944 18:19:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71315 /var/tmp/spdk2.sock 00:06:09.944 18:19:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71315 ']' 00:06:09.944 18:19:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.944 18:19:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.944 18:19:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.944 18:19:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.944 18:19:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.944 [2024-11-29 18:19:29.592075] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
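The target starting here is supposed to die: core 0 is already locked by pid 71304, so the harness wraps the wait in NOT, which passes only when the wrapped command fails. Trimmed from the autotest_common.sh helper visible in the trace (argument validation and signal accounting elided; pid2 is illustrative):

  # Succeed only when the wrapped command fails.
  NOT() {
      local es=0
      "$@" || es=$?
      (( !es == 0 ))       # invert: a nonzero exit from "$@" becomes success
  }
  NOT waitforlisten "$pid2" /var/tmp/spdk2.sock    # pid2: the second, doomed target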
00:06:09.944 [2024-11-29 18:19:29.592445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71315 ] 00:06:09.944 [2024-11-29 18:19:29.764550] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71304 has claimed it. 00:06:09.944 [2024-11-29 18:19:29.764605] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:10.510 ERROR: process (pid: 71315) is no longer running 00:06:10.510 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71315) - No such process 00:06:10.510 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.510 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:10.510 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:10.510 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.510 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:10.510 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.510 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71304 00:06:10.510 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71304 00:06:10.510 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.768 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71304 00:06:10.768 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71304 ']' 00:06:10.768 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71304 00:06:10.768 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:10.768 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.768 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71304 00:06:10.768 killing process with pid 71304 00:06:10.768 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.768 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.768 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71304' 00:06:10.768 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71304 00:06:10.768 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71304 00:06:11.026 00:06:11.026 real 0m2.131s 00:06:11.026 user 0m2.375s 00:06:11.026 sys 0m0.539s 00:06:11.026 18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.026 ************************************ 00:06:11.026 
18:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.026 END TEST locking_app_on_locked_coremask 00:06:11.026 ************************************ 00:06:11.026 18:19:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:11.026 18:19:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.026 18:19:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.026 18:19:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.026 ************************************ 00:06:11.026 START TEST locking_overlapped_coremask 00:06:11.026 ************************************ 00:06:11.026 18:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:11.026 18:19:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71362 00:06:11.026 18:19:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71362 /var/tmp/spdk.sock 00:06:11.026 18:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71362 ']' 00:06:11.026 18:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.026 18:19:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:11.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.026 18:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.026 18:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.026 18:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.026 18:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.026 [2024-11-29 18:19:30.863928] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:06:11.026 [2024-11-29 18:19:30.864055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71362 ] 00:06:11.284 [2024-11-29 18:19:31.019918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.284 [2024-11-29 18:19:31.041247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.284 [2024-11-29 18:19:31.041654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.284 [2024-11-29 18:19:31.041686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.848 18:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.848 18:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:11.848 18:19:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71380 00:06:11.848 18:19:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71380 /var/tmp/spdk2.sock 00:06:11.848 18:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:11.849 18:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71380 /var/tmp/spdk2.sock 00:06:11.849 18:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:11.849 18:19:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:11.849 18:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.849 18:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:11.849 18:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.849 18:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71380 /var/tmp/spdk2.sock 00:06:11.849 18:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71380 ']' 00:06:11.849 18:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.849 18:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.849 18:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.849 18:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.849 18:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.108 [2024-11-29 18:19:31.815120] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:06:12.108 [2024-11-29 18:19:31.815266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71380 ] 00:06:12.108 [2024-11-29 18:19:31.992477] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71362 has claimed it. 00:06:12.108 [2024-11-29 18:19:31.992545] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:12.700 ERROR: process (pid: 71380) is no longer running 00:06:12.700 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71380) - No such process 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71362 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 71362 ']' 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 71362 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71362 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.700 killing process with pid 71362 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71362' 00:06:12.700 18:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 71362 00:06:12.700 18:19:32 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 71362 00:06:12.959 00:06:12.959 real 0m1.955s 00:06:12.959 user 0m5.430s 00:06:12.959 sys 0m0.413s 00:06:12.959 ************************************ 00:06:12.959 END TEST locking_overlapped_coremask 00:06:12.959 ************************************ 00:06:12.959 18:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.959 18:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.959 18:19:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:12.959 18:19:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.959 18:19:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.959 18:19:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.959 ************************************ 00:06:12.959 START TEST locking_overlapped_coremask_via_rpc 00:06:12.959 ************************************ 00:06:12.959 18:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:12.959 18:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71422 00:06:12.959 18:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71422 /var/tmp/spdk.sock 00:06:12.959 18:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71422 ']' 00:06:12.959 18:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.959 18:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.959 18:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.959 18:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.959 18:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.959 18:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:12.959 [2024-11-29 18:19:32.859196] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:12.959 [2024-11-29 18:19:32.859316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71422 ] 00:06:13.218 [2024-11-29 18:19:33.009574] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
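Note: in the via_rpc variant starting above, the target (pid 71422) boots with --disable-cpumask-locks, so no lock files are claimed at startup; the locks are taken afterwards through JSON-RPC. A sketch of the flow this test exercises, using the binaries, masks, and sockets seen in this run (ordering is the test's, condensed here):

    # Sketch of the via-RPC locking flow:
    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # boots without claiming cores
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # boots too, despite the overlap
    scripts/rpc.py framework_enable_cpumask_locks                                # first claim wins (cores 0-2)
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks         # should fail: core 2 already locked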
00:06:13.218 [2024-11-29 18:19:33.009637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.218 [2024-11-29 18:19:33.031424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.218 [2024-11-29 18:19:33.031595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.218 [2024-11-29 18:19:33.031678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.152 18:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.152 18:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:14.152 18:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:14.152 18:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71435 00:06:14.152 18:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71435 /var/tmp/spdk2.sock 00:06:14.152 18:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71435 ']' 00:06:14.152 18:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.152 18:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.152 18:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.152 18:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.152 18:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.152 [2024-11-29 18:19:33.763256] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:14.152 [2024-11-29 18:19:33.763374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71435 ] 00:06:14.152 [2024-11-29 18:19:33.936848] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:14.152 [2024-11-29 18:19:33.936896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.152 [2024-11-29 18:19:33.981631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.152 [2024-11-29 18:19:33.981654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.153 [2024-11-29 18:19:33.981705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:14.720 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.720 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:14.720 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:14.720 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.720 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.979 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.979 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.979 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:14.979 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.979 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:14.979 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.979 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:14.979 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.979 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.979 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.979 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.979 [2024-11-29 18:19:34.641700] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71422 has claimed it. 
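Note: the rpc_cmd call above is the second framework_enable_cpumask_locks attempt (against /var/tmp/spdk2.sock), and the claim_cpu_cores error it triggers surfaces to the client as a JSON-RPC error, printed timestamp-interleaved just below. A sketch of the same call with the error body it should return, reformatted for readability:

    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> expected JSON-RPC error response:
    # { "code": -32603, "message": "Failed to claim CPU core: 2" }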
00:06:14.979 request: 00:06:14.979 { 00:06:14.979 "method": "framework_enable_cpumask_locks", 00:06:14.979 "req_id": 1 00:06:14.979 } 00:06:14.979 Got JSON-RPC error response 00:06:14.979 response: 00:06:14.979 { 00:06:14.979 "code": -32603, 00:06:14.979 "message": "Failed to claim CPU core: 2" 00:06:14.979 } 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71422 /var/tmp/spdk.sock 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71422 ']' 00:06:14.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71435 /var/tmp/spdk2.sock 00:06:14.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71435 ']' 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.980 18:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.241 18:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.241 18:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:15.241 18:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:15.241 18:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:15.241 18:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:15.241 ************************************ 00:06:15.241 END TEST locking_overlapped_coremask_via_rpc 00:06:15.241 ************************************ 00:06:15.241 18:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:15.241 00:06:15.241 real 0m2.305s 00:06:15.241 user 0m1.076s 00:06:15.241 sys 0m0.146s 00:06:15.241 18:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.241 18:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.241 18:19:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:15.241 18:19:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71422 ]] 00:06:15.241 18:19:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71422 00:06:15.241 18:19:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71422 ']' 00:06:15.241 18:19:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71422 00:06:15.241 18:19:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:15.241 18:19:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.241 18:19:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71422 00:06:15.499 18:19:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.499 18:19:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.499 killing process with pid 71422 00:06:15.499 18:19:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71422' 00:06:15.499 18:19:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71422 00:06:15.499 18:19:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71422 00:06:15.757 18:19:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71435 ]] 00:06:15.758 18:19:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71435 00:06:15.758 18:19:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71435 ']' 00:06:15.758 18:19:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71435 00:06:15.758 18:19:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:15.758 18:19:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.758 
18:19:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71435 00:06:15.758 18:19:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:15.758 killing process with pid 71435 00:06:15.758 18:19:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:15.758 18:19:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71435' 00:06:15.758 18:19:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71435 00:06:15.758 18:19:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71435 00:06:16.016 18:19:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:16.016 18:19:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:16.016 18:19:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71422 ]] 00:06:16.016 18:19:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71422 00:06:16.016 18:19:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71422 ']' 00:06:16.016 18:19:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71422 00:06:16.016 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71422) - No such process 00:06:16.016 Process with pid 71422 is not found 00:06:16.016 18:19:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71422 is not found' 00:06:16.016 18:19:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71435 ]] 00:06:16.016 18:19:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71435 00:06:16.016 18:19:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71435 ']' 00:06:16.016 18:19:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71435 00:06:16.016 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71435) - No such process 00:06:16.016 Process with pid 71435 is not found 00:06:16.016 18:19:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71435 is not found' 00:06:16.016 18:19:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:16.016 ************************************ 00:06:16.016 END TEST cpu_locks 00:06:16.016 ************************************ 00:06:16.016 00:06:16.016 real 0m16.489s 00:06:16.016 user 0m29.098s 00:06:16.016 sys 0m4.317s 00:06:16.016 18:19:35 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.016 18:19:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.016 00:06:16.016 real 0m40.044s 00:06:16.016 user 1m18.312s 00:06:16.016 sys 0m7.187s 00:06:16.016 18:19:35 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.016 18:19:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.016 ************************************ 00:06:16.016 END TEST event 00:06:16.016 ************************************ 00:06:16.016 18:19:35 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:16.016 18:19:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.016 18:19:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.016 18:19:35 -- common/autotest_common.sh@10 -- # set +x 00:06:16.016 ************************************ 00:06:16.016 START TEST thread 00:06:16.016 ************************************ 00:06:16.016 18:19:35 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:16.016 * Looking for test storage... 
00:06:16.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:16.016 18:19:35 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.016 18:19:35 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.016 18:19:35 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.275 18:19:35 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.275 18:19:35 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.275 18:19:35 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.275 18:19:35 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.275 18:19:35 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.275 18:19:35 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.275 18:19:35 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.275 18:19:35 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.275 18:19:35 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.275 18:19:35 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.275 18:19:35 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.275 18:19:35 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.275 18:19:35 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:16.275 18:19:35 thread -- scripts/common.sh@345 -- # : 1 00:06:16.275 18:19:35 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.275 18:19:35 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.275 18:19:35 thread -- scripts/common.sh@365 -- # decimal 1 00:06:16.275 18:19:35 thread -- scripts/common.sh@353 -- # local d=1 00:06:16.275 18:19:35 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.275 18:19:35 thread -- scripts/common.sh@355 -- # echo 1 00:06:16.275 18:19:35 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.275 18:19:35 thread -- scripts/common.sh@366 -- # decimal 2 00:06:16.275 18:19:35 thread -- scripts/common.sh@353 -- # local d=2 00:06:16.275 18:19:35 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.275 18:19:35 thread -- scripts/common.sh@355 -- # echo 2 00:06:16.275 18:19:35 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.275 18:19:35 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.275 18:19:35 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.275 18:19:35 thread -- scripts/common.sh@368 -- # return 0 00:06:16.275 18:19:35 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.275 18:19:35 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.275 --rc genhtml_branch_coverage=1 00:06:16.275 --rc genhtml_function_coverage=1 00:06:16.275 --rc genhtml_legend=1 00:06:16.275 --rc geninfo_all_blocks=1 00:06:16.275 --rc geninfo_unexecuted_blocks=1 00:06:16.275 00:06:16.275 ' 00:06:16.275 18:19:35 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.275 --rc genhtml_branch_coverage=1 00:06:16.275 --rc genhtml_function_coverage=1 00:06:16.276 --rc genhtml_legend=1 00:06:16.276 --rc geninfo_all_blocks=1 00:06:16.276 --rc geninfo_unexecuted_blocks=1 00:06:16.276 00:06:16.276 ' 00:06:16.276 18:19:35 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:16.276 --rc genhtml_branch_coverage=1 00:06:16.276 --rc genhtml_function_coverage=1 00:06:16.276 --rc genhtml_legend=1 00:06:16.276 --rc geninfo_all_blocks=1 00:06:16.276 --rc geninfo_unexecuted_blocks=1 00:06:16.276 00:06:16.276 ' 00:06:16.276 18:19:35 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.276 --rc genhtml_branch_coverage=1 00:06:16.276 --rc genhtml_function_coverage=1 00:06:16.276 --rc genhtml_legend=1 00:06:16.276 --rc geninfo_all_blocks=1 00:06:16.276 --rc geninfo_unexecuted_blocks=1 00:06:16.276 00:06:16.276 ' 00:06:16.276 18:19:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:16.276 18:19:35 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:16.276 18:19:35 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.276 18:19:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.276 ************************************ 00:06:16.276 START TEST thread_poller_perf 00:06:16.276 ************************************ 00:06:16.276 18:19:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:16.276 [2024-11-29 18:19:36.000410] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:16.276 [2024-11-29 18:19:36.000874] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71573 ] 00:06:16.276 [2024-11-29 18:19:36.158820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.276 Running 1000 pollers for 1 seconds with 1 microseconds period. 
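Note: the poller_perf invocation above maps its flags directly onto the banner it just printed; a sketch of the same invocation with the flag meanings spelled out (inferred from the banners of this run's two invocations, which differ only in -l):

    # Inferred flag meanings for the run above:
    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
    #   -b 1000 : register 1000 pollers
    #   -l 1    : poll with a 1 microsecond period
    #   -t 1    : run for 1 second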
00:06:16.276 [2024-11-29 18:19:36.177700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.650 [2024-11-29T18:19:37.555Z] ====================================== 00:06:17.650 [2024-11-29T18:19:37.555Z] busy:2612594562 (cyc) 00:06:17.650 [2024-11-29T18:19:37.555Z] total_run_count: 306000 00:06:17.650 [2024-11-29T18:19:37.555Z] tsc_hz: 2600000000 (cyc) 00:06:17.650 [2024-11-29T18:19:37.555Z] ====================================== 00:06:17.650 [2024-11-29T18:19:37.555Z] poller_cost: 8537 (cyc), 3283 (nsec) 00:06:17.650 00:06:17.650 real 0m1.253s 00:06:17.650 user 0m1.092s 00:06:17.650 sys 0m0.054s 00:06:17.650 18:19:37 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.650 18:19:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.650 ************************************ 00:06:17.650 END TEST thread_poller_perf 00:06:17.650 ************************************ 00:06:17.650 18:19:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:17.650 18:19:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:17.650 18:19:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.650 18:19:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.651 ************************************ 00:06:17.651 START TEST thread_poller_perf 00:06:17.651 ************************************ 00:06:17.651 18:19:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:17.651 [2024-11-29 18:19:37.318038] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:17.651 [2024-11-29 18:19:37.318147] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71604 ] 00:06:17.651 [2024-11-29 18:19:37.475720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.651 Running 1000 pollers for 1 seconds with 0 microseconds period. 
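Note: the poller_cost line in the 1 µs results above follows directly from the other counters: busy cycles divided by total_run_count, then converted to nanoseconds at the reported tsc_hz. A sketch of that arithmetic (the 0 µs run just launched works out the same way, to 655 cyc ≈ 251 nsec):

    # poller_cost = busy / total_run_count, then cycles -> ns at 2.6 GHz
    echo $(( 2612594562 / 306000 ))   # -> 8537 (cyc)
    echo $(( 8537 * 1000 / 2600 ))    # -> 3283 (nsec)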
00:06:17.651 [2024-11-29 18:19:37.494546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.023 [2024-11-29T18:19:38.928Z] ====================================== 00:06:19.023 [2024-11-29T18:19:38.928Z] busy:2603323182 (cyc) 00:06:19.023 [2024-11-29T18:19:38.928Z] total_run_count: 3969000 00:06:19.023 [2024-11-29T18:19:38.928Z] tsc_hz: 2600000000 (cyc) 00:06:19.023 [2024-11-29T18:19:38.928Z] ====================================== 00:06:19.023 [2024-11-29T18:19:38.928Z] poller_cost: 655 (cyc), 251 (nsec) 00:06:19.023 00:06:19.023 real 0m1.251s 00:06:19.023 user 0m1.080s 00:06:19.023 sys 0m0.064s 00:06:19.023 ************************************ 00:06:19.023 END TEST thread_poller_perf 00:06:19.023 ************************************ 00:06:19.023 18:19:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.023 18:19:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:19.023 18:19:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:19.023 00:06:19.023 real 0m2.764s 00:06:19.023 user 0m2.276s 00:06:19.023 sys 0m0.239s 00:06:19.023 18:19:38 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.023 18:19:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.023 ************************************ 00:06:19.023 END TEST thread 00:06:19.023 ************************************ 00:06:19.023 18:19:38 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:19.023 18:19:38 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:19.023 18:19:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.023 18:19:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.023 18:19:38 -- common/autotest_common.sh@10 -- # set +x 00:06:19.023 ************************************ 00:06:19.023 START TEST app_cmdline 00:06:19.023 ************************************ 00:06:19.023 18:19:38 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:19.023 * Looking for test storage... 
00:06:19.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:19.023 18:19:38 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:19.023 18:19:38 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:19.023 18:19:38 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:19.023 18:19:38 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.023 18:19:38 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:19.023 18:19:38 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.023 18:19:38 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:19.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.023 --rc genhtml_branch_coverage=1 00:06:19.023 --rc genhtml_function_coverage=1 00:06:19.023 --rc genhtml_legend=1 00:06:19.023 --rc geninfo_all_blocks=1 00:06:19.023 --rc geninfo_unexecuted_blocks=1 00:06:19.023 00:06:19.023 ' 00:06:19.023 18:19:38 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:19.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.023 --rc genhtml_branch_coverage=1 00:06:19.023 --rc genhtml_function_coverage=1 00:06:19.023 --rc genhtml_legend=1 00:06:19.023 --rc geninfo_all_blocks=1 00:06:19.023 --rc geninfo_unexecuted_blocks=1 00:06:19.023 
00:06:19.023 ' 00:06:19.023 18:19:38 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:19.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.023 --rc genhtml_branch_coverage=1 00:06:19.023 --rc genhtml_function_coverage=1 00:06:19.023 --rc genhtml_legend=1 00:06:19.023 --rc geninfo_all_blocks=1 00:06:19.023 --rc geninfo_unexecuted_blocks=1 00:06:19.023 00:06:19.023 ' 00:06:19.023 18:19:38 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:19.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.024 --rc genhtml_branch_coverage=1 00:06:19.024 --rc genhtml_function_coverage=1 00:06:19.024 --rc genhtml_legend=1 00:06:19.024 --rc geninfo_all_blocks=1 00:06:19.024 --rc geninfo_unexecuted_blocks=1 00:06:19.024 00:06:19.024 ' 00:06:19.024 18:19:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:19.024 18:19:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71682 00:06:19.024 18:19:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71682 00:06:19.024 18:19:38 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:19.024 18:19:38 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71682 ']' 00:06:19.024 18:19:38 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.024 18:19:38 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.024 18:19:38 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.024 18:19:38 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.024 18:19:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:19.024 [2024-11-29 18:19:38.857474] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
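Note: the spdk_tgt above (pid 71682) is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods may be invoked over /var/tmp/spdk.sock. A sketch of the allow-list behaviour this suite goes on to verify:

    # Allow-listed methods succeed; anything else is rejected with -32601:
    scripts/rpc.py spdk_get_version         # ok -> version object (SPDK v25.01-pre here)
    scripts/rpc.py rpc_get_methods          # ok -> lists exactly the two allowed methods
    scripts/rpc.py env_dpdk_get_mem_stats   # rejected -> -32601 "Method not found"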
00:06:19.024 [2024-11-29 18:19:38.857596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71682 ] 00:06:19.284 [2024-11-29 18:19:39.008229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.284 [2024-11-29 18:19:39.026868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.853 18:19:39 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.853 18:19:39 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:19.853 18:19:39 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:20.114 { 00:06:20.114 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:06:20.114 "fields": { 00:06:20.114 "major": 25, 00:06:20.114 "minor": 1, 00:06:20.114 "patch": 0, 00:06:20.114 "suffix": "-pre", 00:06:20.114 "commit": "35cd3e84d" 00:06:20.114 } 00:06:20.114 } 00:06:20.114 18:19:39 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:20.114 18:19:39 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:20.114 18:19:39 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:20.114 18:19:39 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:20.114 18:19:39 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:20.114 18:19:39 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:20.114 18:19:39 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.114 18:19:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:20.114 18:19:39 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:20.114 18:19:39 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.114 18:19:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:20.114 18:19:39 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:20.114 18:19:39 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:20.114 18:19:39 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:20.114 18:19:39 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:20.114 18:19:39 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.114 18:19:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.114 18:19:39 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.114 18:19:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.114 18:19:39 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.114 18:19:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.114 18:19:39 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.114 18:19:39 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:20.114 18:19:39 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:20.403 request: 00:06:20.403 { 00:06:20.403 "method": "env_dpdk_get_mem_stats", 00:06:20.403 "req_id": 1 00:06:20.403 } 00:06:20.403 Got JSON-RPC error response 00:06:20.403 response: 00:06:20.403 { 00:06:20.403 "code": -32601, 00:06:20.403 "message": "Method not found" 00:06:20.403 } 00:06:20.403 18:19:40 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:20.403 18:19:40 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:20.403 18:19:40 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:20.403 18:19:40 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:20.403 18:19:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71682 00:06:20.403 18:19:40 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71682 ']' 00:06:20.403 18:19:40 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71682 00:06:20.403 18:19:40 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:20.403 18:19:40 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.403 18:19:40 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71682 00:06:20.403 18:19:40 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.403 killing process with pid 71682 00:06:20.403 18:19:40 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.403 18:19:40 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71682' 00:06:20.403 18:19:40 app_cmdline -- common/autotest_common.sh@973 -- # kill 71682 00:06:20.403 18:19:40 app_cmdline -- common/autotest_common.sh@978 -- # wait 71682 00:06:20.991 00:06:20.991 real 0m1.941s 00:06:20.991 user 0m2.357s 00:06:20.991 sys 0m0.415s 00:06:20.991 18:19:40 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.991 ************************************ 00:06:20.991 END TEST app_cmdline 00:06:20.991 ************************************ 00:06:20.991 18:19:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:20.991 18:19:40 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:20.991 18:19:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.991 18:19:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.991 18:19:40 -- common/autotest_common.sh@10 -- # set +x 00:06:20.991 ************************************ 00:06:20.991 START TEST version 00:06:20.991 ************************************ 00:06:20.991 18:19:40 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:20.991 * Looking for test storage... 
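Note: the version suite starting above reads the version fields straight out of include/spdk/version.h with a grep | cut | tr pipeline, traced in full below. A condensed sketch of that helper, assuming the uppercase field name is passed in directly:

    # Condensed sketch of get_header_version as traced below:
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
            | cut -f2 | tr -d '"'
    }
    get_header_version MAJOR    # -> 25
    get_header_version SUFFIX   # -> -pre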
00:06:20.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:20.991 18:19:40 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.991 18:19:40 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:20.991 18:19:40 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.991 18:19:40 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.991 18:19:40 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.991 18:19:40 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.991 18:19:40 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.991 18:19:40 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.991 18:19:40 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.991 18:19:40 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.991 18:19:40 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.991 18:19:40 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.991 18:19:40 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.991 18:19:40 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.991 18:19:40 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.991 18:19:40 version -- scripts/common.sh@344 -- # case "$op" in 00:06:20.991 18:19:40 version -- scripts/common.sh@345 -- # : 1 00:06:20.991 18:19:40 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.991 18:19:40 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.991 18:19:40 version -- scripts/common.sh@365 -- # decimal 1 00:06:20.991 18:19:40 version -- scripts/common.sh@353 -- # local d=1 00:06:20.991 18:19:40 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.991 18:19:40 version -- scripts/common.sh@355 -- # echo 1 00:06:20.991 18:19:40 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.991 18:19:40 version -- scripts/common.sh@366 -- # decimal 2 00:06:20.991 18:19:40 version -- scripts/common.sh@353 -- # local d=2 00:06:20.991 18:19:40 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.991 18:19:40 version -- scripts/common.sh@355 -- # echo 2 00:06:20.991 18:19:40 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.991 18:19:40 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.991 18:19:40 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.991 18:19:40 version -- scripts/common.sh@368 -- # return 0 00:06:20.991 18:19:40 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.991 18:19:40 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.991 --rc genhtml_branch_coverage=1 00:06:20.991 --rc genhtml_function_coverage=1 00:06:20.991 --rc genhtml_legend=1 00:06:20.991 --rc geninfo_all_blocks=1 00:06:20.991 --rc geninfo_unexecuted_blocks=1 00:06:20.991 00:06:20.992 ' 00:06:20.992 18:19:40 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.992 --rc genhtml_branch_coverage=1 00:06:20.992 --rc genhtml_function_coverage=1 00:06:20.992 --rc genhtml_legend=1 00:06:20.992 --rc geninfo_all_blocks=1 00:06:20.992 --rc geninfo_unexecuted_blocks=1 00:06:20.992 00:06:20.992 ' 00:06:20.992 18:19:40 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:20.992 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:20.992 --rc genhtml_branch_coverage=1 00:06:20.992 --rc genhtml_function_coverage=1 00:06:20.992 --rc genhtml_legend=1 00:06:20.992 --rc geninfo_all_blocks=1 00:06:20.992 --rc geninfo_unexecuted_blocks=1 00:06:20.992 00:06:20.992 ' 00:06:20.992 18:19:40 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.992 --rc genhtml_branch_coverage=1 00:06:20.992 --rc genhtml_function_coverage=1 00:06:20.992 --rc genhtml_legend=1 00:06:20.992 --rc geninfo_all_blocks=1 00:06:20.992 --rc geninfo_unexecuted_blocks=1 00:06:20.992 00:06:20.992 ' 00:06:20.992 18:19:40 version -- app/version.sh@17 -- # get_header_version major 00:06:20.992 18:19:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:20.992 18:19:40 version -- app/version.sh@14 -- # cut -f2 00:06:20.992 18:19:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:20.992 18:19:40 version -- app/version.sh@17 -- # major=25 00:06:20.992 18:19:40 version -- app/version.sh@18 -- # get_header_version minor 00:06:20.992 18:19:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:20.992 18:19:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:20.992 18:19:40 version -- app/version.sh@14 -- # cut -f2 00:06:20.992 18:19:40 version -- app/version.sh@18 -- # minor=1 00:06:20.992 18:19:40 version -- app/version.sh@19 -- # get_header_version patch 00:06:20.992 18:19:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:20.992 18:19:40 version -- app/version.sh@14 -- # cut -f2 00:06:20.992 18:19:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:20.992 18:19:40 version -- app/version.sh@19 -- # patch=0 00:06:20.992 18:19:40 version -- app/version.sh@20 -- # get_header_version suffix 00:06:20.992 18:19:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:20.992 18:19:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:20.992 18:19:40 version -- app/version.sh@14 -- # cut -f2 00:06:20.992 18:19:40 version -- app/version.sh@20 -- # suffix=-pre 00:06:20.992 18:19:40 version -- app/version.sh@22 -- # version=25.1 00:06:20.992 18:19:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:20.992 18:19:40 version -- app/version.sh@28 -- # version=25.1rc0 00:06:20.992 18:19:40 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:20.992 18:19:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:20.992 18:19:40 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:20.992 18:19:40 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:20.992 00:06:20.992 real 0m0.214s 00:06:20.992 user 0m0.131s 00:06:20.992 sys 0m0.104s 00:06:20.992 18:19:40 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.992 ************************************ 00:06:20.992 END TEST version 00:06:20.992 ************************************ 00:06:20.992 18:19:40 version -- common/autotest_common.sh@10 -- # set +x 00:06:21.253 18:19:40 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:21.253 18:19:40 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:21.253 18:19:40 -- spdk/autotest.sh@194 -- # uname -s 00:06:21.253 18:19:40 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:21.253 18:19:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:21.253 18:19:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:21.253 18:19:40 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:21.253 18:19:40 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:21.253 18:19:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:21.253 18:19:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.253 18:19:40 -- common/autotest_common.sh@10 -- # set +x 00:06:21.253 ************************************ 00:06:21.253 START TEST blockdev_nvme 00:06:21.253 ************************************ 00:06:21.253 18:19:40 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:21.253 * Looking for test storage... 00:06:21.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:21.253 18:19:41 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:21.253 18:19:41 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:21.253 18:19:41 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:21.253 18:19:41 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.253 18:19:41 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:21.253 18:19:41 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.253 18:19:41 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:21.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.253 --rc genhtml_branch_coverage=1 00:06:21.253 --rc genhtml_function_coverage=1 00:06:21.253 --rc genhtml_legend=1 00:06:21.253 --rc geninfo_all_blocks=1 00:06:21.253 --rc geninfo_unexecuted_blocks=1 00:06:21.253 00:06:21.253 ' 00:06:21.253 18:19:41 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:21.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.253 --rc genhtml_branch_coverage=1 00:06:21.253 --rc genhtml_function_coverage=1 00:06:21.253 --rc genhtml_legend=1 00:06:21.253 --rc geninfo_all_blocks=1 00:06:21.253 --rc geninfo_unexecuted_blocks=1 00:06:21.253 00:06:21.253 ' 00:06:21.253 18:19:41 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:21.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.253 --rc genhtml_branch_coverage=1 00:06:21.253 --rc genhtml_function_coverage=1 00:06:21.253 --rc genhtml_legend=1 00:06:21.253 --rc geninfo_all_blocks=1 00:06:21.253 --rc geninfo_unexecuted_blocks=1 00:06:21.253 00:06:21.253 ' 00:06:21.253 18:19:41 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:21.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.253 --rc genhtml_branch_coverage=1 00:06:21.253 --rc genhtml_function_coverage=1 00:06:21.253 --rc genhtml_legend=1 00:06:21.253 --rc geninfo_all_blocks=1 00:06:21.253 --rc geninfo_unexecuted_blocks=1 00:06:21.253 00:06:21.253 ' 00:06:21.253 18:19:41 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:21.253 18:19:41 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:21.253 18:19:41 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:21.253 18:19:41 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:21.253 18:19:41 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:21.253 18:19:41 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:21.253 18:19:41 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:21.253 18:19:41 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:21.253 18:19:41 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:21.253 18:19:41 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:21.253 18:19:41 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:21.254 18:19:41 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:21.254 18:19:41 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:21.254 18:19:41 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:21.254 18:19:41 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:21.254 18:19:41 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:21.254 18:19:41 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:21.254 18:19:41 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:21.254 18:19:41 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:21.254 18:19:41 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:21.254 18:19:41 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:21.254 18:19:41 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:21.254 18:19:41 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:21.254 18:19:41 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:21.254 18:19:41 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71854 00:06:21.254 18:19:41 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:21.254 18:19:41 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71854 00:06:21.254 18:19:41 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 71854 ']' 00:06:21.254 18:19:41 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.254 18:19:41 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.254 18:19:41 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.254 18:19:41 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.254 18:19:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.254 18:19:41 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:21.515 [2024-11-29 18:19:41.180207] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
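[Editor's note] The waitforlisten helper traced above is what gates the rest of the run on spdk_tgt actually serving RPCs, not merely existing as a process. A minimal sketch of that polling pattern, assuming only what the trace shows (the /var/tmp/spdk.sock path, max_retries=100, and scripts/rpc.py); the function body is illustrative, not the autotest_common.sh source:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 1; i <= 100; i++)); do    # max_retries=100, as traced above
            # Bail out if the target died before it could open the socket.
            kill -0 "$pid" 2>/dev/null || return 1
            # The socket counts as "listening" once any RPC round-trips.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }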
00:06:21.515 [2024-11-29 18:19:41.180367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71854 ] 00:06:21.515 [2024-11-29 18:19:41.344051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.515 [2024-11-29 18:19:41.373524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.457 18:19:42 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.457 18:19:42 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:22.457 18:19:42 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:22.457 18:19:42 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:22.457 18:19:42 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:22.457 18:19:42 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:22.457 18:19:42 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:22.457 18:19:42 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:22.457 18:19:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.457 18:19:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.719 18:19:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.719 18:19:42 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:22.719 18:19:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.719 18:19:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.719 18:19:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.719 18:19:42 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:22.719 18:19:42 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:22.719 18:19:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.719 18:19:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.719 18:19:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.719 18:19:42 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:22.719 18:19:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.719 18:19:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.719 18:19:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.719 18:19:42 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:22.719 18:19:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.719 18:19:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.719 18:19:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.719 18:19:42 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:22.719 18:19:42 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:22.720 18:19:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.720 18:19:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.720 18:19:42 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:22.720 18:19:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.720 18:19:42 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:22.720 18:19:42 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:22.720 18:19:42 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3aa63bee-a9f5-4e1d-9f52-167d895a36ea"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3aa63bee-a9f5-4e1d-9f52-167d895a36ea",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "ac84ddae-655e-4465-874e-1f29f7e12ce1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ac84ddae-655e-4465-874e-1f29f7e12ce1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ca20156f-0dda-4db4-bd30-4ec82d90d1fb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ca20156f-0dda-4db4-bd30-4ec82d90d1fb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "7c592740-55f7-4ad3-a112-279e9b9c7cbd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7c592740-55f7-4ad3-a112-279e9b9c7cbd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "84f27a1f-a1e5-4e50-8838-3c76d8e918dc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "84f27a1f-a1e5-4e50-8838-3c76d8e918dc",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "5dbcf215-2e48-47a5-a71a-5ad57446791c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5dbcf215-2e48-47a5-a71a-5ad57446791c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:22.720 18:19:42 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:22.720 18:19:42 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:22.720 18:19:42 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:22.720 18:19:42 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 71854 00:06:22.720 18:19:42 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 71854 ']' 00:06:22.720 18:19:42 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 71854 00:06:22.720 18:19:42 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:22.720 18:19:42 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.720 18:19:42 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71854 00:06:22.720 killing process with pid 71854 00:06:22.720 18:19:42 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.720 18:19:42 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.720 18:19:42 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71854' 00:06:22.720 18:19:42 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 71854 00:06:22.720 18:19:42 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 71854 00:06:23.294 18:19:42 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:23.294 18:19:42 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:23.294 18:19:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:23.294 18:19:42 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.294 18:19:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:23.294 ************************************ 00:06:23.294 START TEST bdev_hello_world 00:06:23.294 ************************************ 00:06:23.294 18:19:42 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:23.294 [2024-11-29 18:19:43.030420] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:23.294 [2024-11-29 18:19:43.030645] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71916 ] 00:06:23.294 [2024-11-29 18:19:43.192778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.555 [2024-11-29 18:19:43.222792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.816 [2024-11-29 18:19:43.629540] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:23.816 [2024-11-29 18:19:43.629612] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:23.816 [2024-11-29 18:19:43.629638] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:23.816 [2024-11-29 18:19:43.632022] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:23.816 [2024-11-29 18:19:43.633093] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:23.816 [2024-11-29 18:19:43.633134] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:23.816 [2024-11-29 18:19:43.634327] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
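[Editor's note] The hello_bdev run above wrote "Hello World!" through the Nvme0n1 bdev and read it back, driven entirely by the JSON passed via --json. A sketch of reproducing that run by hand; the /tmp/hello.json path is invented here, and the "subsystems" wrapper is the usual SPDK config-file layout rather than something this log shows directly:

    # Attach one PCIe controller, then point the example at its namespace bdev.
    cat > /tmp/hello.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/hello.json -b Nvme0n1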
00:06:23.816 00:06:23.816 [2024-11-29 18:19:43.634375] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:24.078 00:06:24.078 real 0m0.886s 00:06:24.078 user 0m0.568s 00:06:24.078 sys 0m0.213s 00:06:24.078 18:19:43 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.078 ************************************ 00:06:24.078 END TEST bdev_hello_world 00:06:24.078 ************************************ 00:06:24.078 18:19:43 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:24.078 18:19:43 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:24.078 18:19:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:24.078 18:19:43 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.078 18:19:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:24.078 ************************************ 00:06:24.078 START TEST bdev_bounds 00:06:24.078 ************************************ 00:06:24.078 18:19:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:24.078 18:19:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71947 00:06:24.078 18:19:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.078 Process bdevio pid: 71947 00:06:24.078 18:19:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71947' 00:06:24.078 18:19:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71947 00:06:24.078 18:19:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 71947 ']' 00:06:24.078 18:19:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:24.078 18:19:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.078 18:19:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.078 18:19:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.078 18:19:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.078 18:19:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:24.337 [2024-11-29 18:19:43.988418] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
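[Editor's note] The bdev_bounds test starting here is a two-process flow: bdevio is launched with -w so it initializes the bdevs and then waits, and tests.py perform_tests fires the CUnit suites over the RPC socket. A rough sketch of the same sequence by hand, using the paths from the trace above; the wait-for-socket step is elided (see the waitforlisten sketch earlier):

    bdevio_dir=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$bdevio_dir/bdevio" -w -s 0 --json "$conf" &   # -w: idle until the perform_tests RPC arrives
    bdevio_pid=$!
    # ... poll /var/tmp/spdk.sock until it answers ...
    "$bdevio_dir/tests.py" perform_tests             # runs all suites, prints the Run Summary
    wait "$bdevio_pid"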
00:06:24.337 [2024-11-29 18:19:43.988597] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71947 ] 00:06:24.337 [2024-11-29 18:19:44.151347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.337 [2024-11-29 18:19:44.180221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.337 [2024-11-29 18:19:44.180445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.337 [2024-11-29 18:19:44.180474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.277 18:19:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.277 18:19:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:25.277 18:19:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:25.277 I/O targets: 00:06:25.277 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:25.277 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:25.277 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:25.277 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:25.277 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:25.277 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:25.277 00:06:25.277 00:06:25.277 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.277 http://cunit.sourceforge.net/ 00:06:25.277 00:06:25.277 00:06:25.277 Suite: bdevio tests on: Nvme3n1 00:06:25.277 Test: blockdev write read block ...passed 00:06:25.277 Test: blockdev write zeroes read block ...passed 00:06:25.277 Test: blockdev write zeroes read no split ...passed 00:06:25.277 Test: blockdev write zeroes read split ...passed 00:06:25.277 Test: blockdev write zeroes read split partial ...passed 00:06:25.277 Test: blockdev reset ...[2024-11-29 18:19:44.988731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:25.277 [2024-11-29 18:19:44.991395] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:25.277 passed 00:06:25.277 Test: blockdev write read 8 blocks ...passed 00:06:25.277 Test: blockdev write read size > 128k ...passed 00:06:25.277 Test: blockdev write read invalid size ...passed 00:06:25.277 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:25.277 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:25.277 Test: blockdev write read max offset ...passed 00:06:25.277 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:25.277 Test: blockdev writev readv 8 blocks ...passed 00:06:25.277 Test: blockdev writev readv 30 x 1block ...passed 00:06:25.277 Test: blockdev writev readv block ...passed 00:06:25.277 Test: blockdev writev readv size > 128k ...passed 00:06:25.277 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:25.277 Test: blockdev comparev and writev ...[2024-11-29 18:19:45.011712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d460e000 len:0x1000 00:06:25.277 [2024-11-29 18:19:45.011775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:25.277 passed 00:06:25.277 Test: blockdev nvme passthru rw ...passed 00:06:25.277 Test: blockdev nvme passthru vendor specific ...[2024-11-29 18:19:45.014580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:25.277 [2024-11-29 18:19:45.014624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:25.277 passed 00:06:25.277 Test: blockdev nvme admin passthru ...passed 00:06:25.277 Test: blockdev copy ...passed 00:06:25.277 Suite: bdevio tests on: Nvme2n3 00:06:25.277 Test: blockdev write read block ...passed 00:06:25.277 Test: blockdev write zeroes read block ...passed 00:06:25.277 Test: blockdev write zeroes read no split ...passed 00:06:25.277 Test: blockdev write zeroes read split ...passed 00:06:25.277 Test: blockdev write zeroes read split partial ...passed 00:06:25.277 Test: blockdev reset ...[2024-11-29 18:19:45.045419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:25.277 [2024-11-29 18:19:45.049636] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:25.277 passed 00:06:25.277 Test: blockdev write read 8 blocks ...passed 00:06:25.277 Test: blockdev write read size > 128k ...passed 00:06:25.277 Test: blockdev write read invalid size ...passed 00:06:25.277 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:25.277 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:25.277 Test: blockdev write read max offset ...passed 00:06:25.277 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:25.277 Test: blockdev writev readv 8 blocks ...passed 00:06:25.277 Test: blockdev writev readv 30 x 1block ...passed 00:06:25.277 Test: blockdev writev readv block ...passed 00:06:25.277 Test: blockdev writev readv size > 128k ...passed 00:06:25.277 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:25.277 Test: blockdev comparev and writev ...[2024-11-29 18:19:45.068820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4606000 len:0x1000 00:06:25.277 [2024-11-29 18:19:45.068877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:25.277 passed 00:06:25.277 Test: blockdev nvme passthru rw ...passed 00:06:25.277 Test: blockdev nvme passthru vendor specific ...passed 00:06:25.277 Test: blockdev nvme admin passthru ...[2024-11-29 18:19:45.072102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:25.277 [2024-11-29 18:19:45.072144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:25.277 passed 00:06:25.277 Test: blockdev copy ...passed 00:06:25.277 Suite: bdevio tests on: Nvme2n2 00:06:25.277 Test: blockdev write read block ...passed 00:06:25.277 Test: blockdev write zeroes read block ...passed 00:06:25.277 Test: blockdev write zeroes read no split ...passed 00:06:25.277 Test: blockdev write zeroes read split ...passed 00:06:25.277 Test: blockdev write zeroes read split partial ...passed 00:06:25.277 Test: blockdev reset ...[2024-11-29 18:19:45.098623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:25.277 [2024-11-29 18:19:45.103529] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:06:25.277 Test: blockdev write read 8 blocks ...
00:06:25.277 passed 00:06:25.277 Test: blockdev write read size > 128k ...passed 00:06:25.277 Test: blockdev write read invalid size ...passed 00:06:25.277 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:25.277 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:25.277 Test: blockdev write read max offset ...passed 00:06:25.277 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:25.277 Test: blockdev writev readv 8 blocks ...passed 00:06:25.277 Test: blockdev writev readv 30 x 1block ...passed 00:06:25.277 Test: blockdev writev readv block ...passed 00:06:25.277 Test: blockdev writev readv size > 128k ...passed 00:06:25.277 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:25.277 Test: blockdev comparev and writev ...[2024-11-29 18:19:45.123513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4608000 len:0x1000 00:06:25.277 [2024-11-29 18:19:45.123569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:25.277 passed 00:06:25.277 Test: blockdev nvme passthru rw ...passed 00:06:25.277 Test: blockdev nvme passthru vendor specific ...[2024-11-29 18:19:45.126782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:25.277 [2024-11-29 18:19:45.126824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:25.277 passed 00:06:25.277 Test: blockdev nvme admin passthru ...passed 00:06:25.277 Test: blockdev copy ...passed 00:06:25.277 Suite: bdevio tests on: Nvme2n1 00:06:25.277 Test: blockdev write read block ...passed 00:06:25.277 Test: blockdev write zeroes read block ...passed 00:06:25.277 Test: blockdev write zeroes read no split ...passed 00:06:25.277 Test: blockdev write zeroes read split ...passed 00:06:25.277 Test: blockdev write zeroes read split partial ...passed 00:06:25.277 Test: blockdev reset ...[2024-11-29 18:19:45.153624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:25.277 [2024-11-29 18:19:45.156794] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:06:25.277 Test: blockdev write read 8 blocks ...
00:06:25.277 passed 00:06:25.277 Test: blockdev write read size > 128k ...passed 00:06:25.277 Test: blockdev write read invalid size ...passed 00:06:25.277 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:25.277 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:25.277 Test: blockdev write read max offset ...passed 00:06:25.277 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:25.278 Test: blockdev writev readv 8 blocks ...passed 00:06:25.278 Test: blockdev writev readv 30 x 1block ...passed 00:06:25.278 Test: blockdev writev readv block ...passed 00:06:25.278 Test: blockdev writev readv size > 128k ...passed 00:06:25.278 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:25.278 Test: blockdev comparev and writev ...[2024-11-29 18:19:45.178924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4204000 len:0x1000 00:06:25.278 [2024-11-29 18:19:45.179140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:25.278 passed 00:06:25.538 Test: blockdev nvme passthru rw ...passed 00:06:25.538 Test: blockdev nvme passthru vendor specific ...[2024-11-29 18:19:45.181982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:25.538 [2024-11-29 18:19:45.182051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:25.538 passed 00:06:25.538 Test: blockdev nvme admin passthru ...passed 00:06:25.538 Test: blockdev copy ...passed 00:06:25.538 Suite: bdevio tests on: Nvme1n1 00:06:25.538 Test: blockdev write read block ...passed 00:06:25.538 Test: blockdev write zeroes read block ...passed 00:06:25.538 Test: blockdev write zeroes read no split ...passed 00:06:25.538 Test: blockdev write zeroes read split ...passed 00:06:25.538 Test: blockdev write zeroes read split partial ...passed 00:06:25.538 Test: blockdev reset ...[2024-11-29 18:19:45.209010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:25.538 [2024-11-29 18:19:45.211615] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:25.538 passed 00:06:25.538 Test: blockdev write read 8 blocks ...passed 00:06:25.539 Test: blockdev write read size > 128k ...passed 00:06:25.539 Test: blockdev write read invalid size ...passed 00:06:25.539 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:25.539 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:25.539 Test: blockdev write read max offset ...passed 00:06:25.539 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:25.539 Test: blockdev writev readv 8 blocks ...passed 00:06:25.539 Test: blockdev writev readv 30 x 1block ...passed 00:06:25.539 Test: blockdev writev readv block ...passed 00:06:25.539 Test: blockdev writev readv size > 128k ...passed 00:06:25.539 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:25.539 Test: blockdev comparev and writev ...[2024-11-29 18:19:45.232115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2eb63d000 len:0x1000 00:06:25.539 [2024-11-29 18:19:45.232503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:25.539 passed 00:06:25.539 Test: blockdev nvme passthru rw ...passed 00:06:25.539 Test: blockdev nvme passthru vendor specific ...[2024-11-29 18:19:45.235284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:25.539 [2024-11-29 18:19:45.235630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:25.539 passed 00:06:25.539 Test: blockdev nvme admin passthru ...passed 00:06:25.539 Test: blockdev copy ...passed 00:06:25.539 Suite: bdevio tests on: Nvme0n1 00:06:25.539 Test: blockdev write read block ...passed 00:06:25.539 Test: blockdev write zeroes read block ...passed 00:06:25.539 Test: blockdev write zeroes read no split ...passed 00:06:25.539 Test: blockdev write zeroes read split ...passed 00:06:25.539 Test: blockdev write zeroes read split partial ...passed 00:06:25.539 Test: blockdev reset ...[2024-11-29 18:19:45.266754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:25.539 [2024-11-29 18:19:45.270205] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. passed 00:06:25.539 Test: blockdev write read 8 blocks ...
00:06:25.539 passed 00:06:25.539 Test: blockdev write read size > 128k ...passed 00:06:25.539 Test: blockdev write read invalid size ...passed 00:06:25.539 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:25.539 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:25.539 Test: blockdev write read max offset ...passed 00:06:25.539 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:25.539 Test: blockdev writev readv 8 blocks ...passed 00:06:25.539 Test: blockdev writev readv 30 x 1block ...passed 00:06:25.539 Test: blockdev writev readv block ...passed 00:06:25.539 Test: blockdev writev readv size > 128k ...passed 00:06:25.539 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:25.539 Test: blockdev comparev and writev ...passed 00:06:25.539 Test: blockdev nvme passthru rw ...[2024-11-29 18:19:45.287890] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:25.539 separate metadata which is not supported yet. 00:06:25.539 passed 00:06:25.539 Test: blockdev nvme passthru vendor specific ...passed 00:06:25.539 Test: blockdev nvme admin passthru ...[2024-11-29 18:19:45.290114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:25.539 [2024-11-29 18:19:45.290206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:25.539 passed 00:06:25.539 Test: blockdev copy ...passed 00:06:25.539 00:06:25.539 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.539 suites 6 6 n/a 0 0 00:06:25.539 tests 138 138 138 0 0 00:06:25.539 asserts 893 893 893 0 n/a 00:06:25.539 00:06:25.539 Elapsed time = 0.752 seconds 00:06:25.539 0 00:06:25.539 18:19:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71947 00:06:25.539 18:19:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 71947 ']' 00:06:25.539 18:19:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 71947 00:06:25.539 18:19:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:25.539 18:19:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.539 18:19:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71947 00:06:25.539 18:19:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.539 18:19:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.539 18:19:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71947' 00:06:25.539 killing process with pid 71947 00:06:25.539 18:19:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 71947 00:06:25.539 18:19:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 71947 00:06:25.799 18:19:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:25.799 00:06:25.799 real 0m1.636s 00:06:25.799 user 0m4.068s 00:06:25.799 sys 0m0.320s 00:06:25.799 18:19:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.799 ************************************ 00:06:25.799 END TEST bdev_bounds 00:06:25.799 18:19:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:25.799 
************************************ 00:06:25.799 18:19:45 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:25.799 18:19:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:25.799 18:19:45 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.799 18:19:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:25.799 ************************************ 00:06:25.799 START TEST bdev_nbd 00:06:25.799 ************************************ 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:25.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
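[Editor's note] The bdev_nbd test that begins here exports each bdev as a kernel /dev/nbdX device through a dedicated RPC socket (/var/tmp/spdk-nbd.sock) and exercises it with ordinary block I/O. A condensed sketch of the start/inspect/stop cycle traced below, with error handling omitted; the loop shape is illustrative, while the RPC names and socket path are taken from the log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for bdev in Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
        # Without an explicit /dev/nbdX argument the target picks the next free device.
        dev=$($rpc nbd_start_disk "$bdev")
        waitfornbd "$(basename "$dev")"   # readiness check, traced further below
    done
    $rpc nbd_get_disks                     # JSON list of nbd_device/bdev_name pairs
    for dev in $($rpc nbd_get_disks | jq -r '.[] | .nbd_device'); do
        $rpc nbd_stop_disk "$dev"
    done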
00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72001 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72001 /var/tmp/spdk-nbd.sock 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72001 ']' 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:25.799 18:19:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:26.060 [2024-11-29 18:19:45.703008] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
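[Editor's note] waitfornbd, whose traces fill the rest of this section, is the readiness gate for each exported device: poll /proc/partitions until the kernel registers the name, then prove the device is readable with one O_DIRECT block and a non-zero size check. A sketch assembled from the commands visible in the traces below; the grep, dd, and stat calls and the retry bound match the trace, while the scratch path and sleep interval are assumptions:

    waitfornbd_sketch() {
        local nbd_name=$1 tmpfile=/tmp/nbdtest i
        for ((i = 1; i <= 20; i++)); do
            # The device exists once the kernel lists it in /proc/partitions.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed backoff; the trace only shows the retry loop
        done
        # Read a single 4 KiB block, bypassing the page cache, and verify it landed.
        dd if=/dev/"$nbd_name" of="$tmpfile" bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s "$tmpfile")" -ne 0 ]
    }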
00:06:26.060 [2024-11-29 18:19:45.703179] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:26.060 [2024-11-29 18:19:45.867801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.060 [2024-11-29 18:19:45.900996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:27.003 1+0 records in 
00:06:27.003 1+0 records out 00:06:27.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011721 s, 3.5 MB/s 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:27.003 18:19:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:27.261 1+0 records in 00:06:27.261 1+0 records out 00:06:27.261 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00180921 s, 2.3 MB/s 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:27.261 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:27.519 1+0 records in 00:06:27.519 1+0 records out 00:06:27.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102804 s, 4.0 MB/s 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:27.519 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:27.778 1+0 records in 00:06:27.778 1+0 records out 00:06:27.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000652242 s, 6.3 MB/s 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.778 18:19:47 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:27.778 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.038 1+0 records in 00:06:28.038 1+0 records out 00:06:28.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000720182 s, 5.7 MB/s 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:28.038 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:28.039 18:19:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.299 1+0 records in 00:06:28.299 1+0 records out 00:06:28.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100931 s, 4.1 MB/s 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:28.299 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.560 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:28.560 { 00:06:28.560 "nbd_device": "/dev/nbd0", 00:06:28.560 "bdev_name": "Nvme0n1" 00:06:28.560 }, 00:06:28.560 { 00:06:28.560 "nbd_device": "/dev/nbd1", 00:06:28.560 "bdev_name": "Nvme1n1" 00:06:28.560 }, 00:06:28.560 { 00:06:28.560 "nbd_device": "/dev/nbd2", 00:06:28.560 "bdev_name": "Nvme2n1" 00:06:28.560 }, 00:06:28.560 { 00:06:28.560 "nbd_device": "/dev/nbd3", 00:06:28.560 "bdev_name": "Nvme2n2" 00:06:28.560 }, 00:06:28.560 { 00:06:28.560 "nbd_device": "/dev/nbd4", 00:06:28.560 "bdev_name": "Nvme2n3" 00:06:28.560 }, 00:06:28.560 { 00:06:28.560 "nbd_device": "/dev/nbd5", 00:06:28.560 "bdev_name": "Nvme3n1" 00:06:28.560 } 00:06:28.560 ]' 00:06:28.560 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:28.560 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:28.560 { 00:06:28.560 "nbd_device": "/dev/nbd0", 00:06:28.560 "bdev_name": "Nvme0n1" 00:06:28.560 }, 00:06:28.560 { 00:06:28.560 "nbd_device": "/dev/nbd1", 00:06:28.560 "bdev_name": "Nvme1n1" 00:06:28.560 }, 00:06:28.560 { 00:06:28.560 "nbd_device": "/dev/nbd2", 00:06:28.560 "bdev_name": "Nvme2n1" 00:06:28.560 }, 00:06:28.560 { 00:06:28.560 "nbd_device": "/dev/nbd3", 00:06:28.560 "bdev_name": "Nvme2n2" 00:06:28.560 }, 00:06:28.560 { 00:06:28.560 "nbd_device": "/dev/nbd4", 00:06:28.560 "bdev_name": "Nvme2n3" 00:06:28.560 }, 00:06:28.560 { 00:06:28.560 "nbd_device": "/dev/nbd5", 00:06:28.560 "bdev_name": "Nvme3n1" 00:06:28.560 } 00:06:28.560 ]' 00:06:28.560 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:28.560 18:19:48 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:28.560 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.560 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:28.560 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.560 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:28.560 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.560 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.821 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:29.080 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:29.080 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:29.080 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:29.080 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.080 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.080 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:29.080 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.080 18:19:48 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:29.080 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.080 18:19:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:29.340 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:29.340 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:29.340 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:29.340 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.340 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.340 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:29.340 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.340 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.340 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.340 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:29.600 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:29.600 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:29.600 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:29.600 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.600 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.600 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:29.600 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.600 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.600 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.600 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:29.878 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:29.878 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:29.878 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:29.878 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.878 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.878 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:29.878 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.878 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.878 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.878 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.878 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.186 18:19:49 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:30.186 18:19:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:30.186 /dev/nbd0 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:30.447 
18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:30.447 1+0 records in 00:06:30.447 1+0 records out 00:06:30.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102605 s, 4.0 MB/s 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:30.447 /dev/nbd1 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:30.447 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:30.708 1+0 records in 00:06:30.708 1+0 records out 00:06:30.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000898834 s, 4.6 MB/s 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:30.708 /dev/nbd10 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:30.708 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:30.709 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:30.709 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:30.709 1+0 records in 00:06:30.709 1+0 records out 00:06:30.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119735 s, 3.4 MB/s 00:06:30.709 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:30.970 /dev/nbd11 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:30.970 1+0 records in 00:06:30.970 1+0 records out 00:06:30.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100125 s, 4.1 MB/s 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:30.970 18:19:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:31.231 /dev/nbd12 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:31.231 1+0 records in 00:06:31.231 1+0 records out 00:06:31.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000994637 s, 4.1 MB/s 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:31.231 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:31.492 /dev/nbd13 00:06:31.492 18:19:51 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:31.492 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:31.492 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:31.492 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:31.492 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:31.492 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:31.492 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:31.492 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:31.492 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:31.492 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:31.492 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:31.492 1+0 records in 00:06:31.492 1+0 records out 00:06:31.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000952459 s, 4.3 MB/s 00:06:31.493 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.493 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:31.493 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.493 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:31.493 18:19:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:31.493 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.493 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:31.493 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.493 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.493 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.753 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:31.753 { 00:06:31.753 "nbd_device": "/dev/nbd0", 00:06:31.753 "bdev_name": "Nvme0n1" 00:06:31.753 }, 00:06:31.753 { 00:06:31.753 "nbd_device": "/dev/nbd1", 00:06:31.753 "bdev_name": "Nvme1n1" 00:06:31.753 }, 00:06:31.753 { 00:06:31.753 "nbd_device": "/dev/nbd10", 00:06:31.753 "bdev_name": "Nvme2n1" 00:06:31.753 }, 00:06:31.753 { 00:06:31.753 "nbd_device": "/dev/nbd11", 00:06:31.753 "bdev_name": "Nvme2n2" 00:06:31.753 }, 00:06:31.753 { 00:06:31.753 "nbd_device": "/dev/nbd12", 00:06:31.753 "bdev_name": "Nvme2n3" 00:06:31.753 }, 00:06:31.753 { 00:06:31.753 "nbd_device": "/dev/nbd13", 00:06:31.753 "bdev_name": "Nvme3n1" 00:06:31.753 } 00:06:31.753 ]' 00:06:31.753 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:31.753 { 00:06:31.753 "nbd_device": "/dev/nbd0", 00:06:31.753 "bdev_name": "Nvme0n1" 00:06:31.753 }, 00:06:31.753 { 00:06:31.753 "nbd_device": "/dev/nbd1", 00:06:31.753 "bdev_name": "Nvme1n1" 00:06:31.753 }, 00:06:31.753 { 00:06:31.753 "nbd_device": "/dev/nbd10", 00:06:31.753 "bdev_name": "Nvme2n1" 00:06:31.753 }, 00:06:31.753 
{ 00:06:31.753 "nbd_device": "/dev/nbd11", 00:06:31.753 "bdev_name": "Nvme2n2" 00:06:31.753 }, 00:06:31.753 { 00:06:31.753 "nbd_device": "/dev/nbd12", 00:06:31.753 "bdev_name": "Nvme2n3" 00:06:31.753 }, 00:06:31.753 { 00:06:31.754 "nbd_device": "/dev/nbd13", 00:06:31.754 "bdev_name": "Nvme3n1" 00:06:31.754 } 00:06:31.754 ]' 00:06:31.754 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.754 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:31.754 /dev/nbd1 00:06:31.754 /dev/nbd10 00:06:31.754 /dev/nbd11 00:06:31.754 /dev/nbd12 00:06:31.754 /dev/nbd13' 00:06:31.754 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:31.754 /dev/nbd1 00:06:31.754 /dev/nbd10 00:06:31.754 /dev/nbd11 00:06:31.754 /dev/nbd12 00:06:31.754 /dev/nbd13' 00:06:31.754 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.754 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:31.754 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:31.754 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:31.754 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:31.754 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:31.754 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:31.754 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.754 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:31.754 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:31.754 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:31.754 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:31.754 256+0 records in 00:06:31.754 256+0 records out 00:06:31.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101684 s, 103 MB/s 00:06:32.015 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.015 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.015 256+0 records in 00:06:32.015 256+0 records out 00:06:32.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.23726 s, 4.4 MB/s 00:06:32.015 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.015 18:19:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.276 256+0 records in 00:06:32.276 256+0 records out 00:06:32.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.237595 s, 4.4 MB/s 00:06:32.276 18:19:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.276 18:19:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:32.537 256+0 records in 00:06:32.537 256+0 records out 00:06:32.537 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.247179 s, 4.2 MB/s 00:06:32.537 18:19:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.537 18:19:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:32.797 256+0 records in 00:06:32.797 256+0 records out 00:06:32.797 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.206896 s, 5.1 MB/s 00:06:32.797 18:19:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.797 18:19:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:33.056 256+0 records in 00:06:33.056 256+0 records out 00:06:33.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239157 s, 4.4 MB/s 00:06:33.056 18:19:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.056 18:19:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:33.317 256+0 records in 00:06:33.317 256+0 records out 00:06:33.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.228297 s, 4.6 MB/s 00:06:33.317 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:33.317 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:33.317 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.317 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.317 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:33.317 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.317 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.317 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.317 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:33.317 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.318 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:33.318 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.318 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:33.318 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.318 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:33.318 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.318 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:33.318 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.318 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:33.318 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:33.318 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:33.318 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.318 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:33.318 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.318 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:33.318 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.318 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.580 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.580 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.580 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.580 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.580 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.580 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.580 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:33.580 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.580 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.580 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.842 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.842 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.842 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.842 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.842 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.842 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.842 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:33.842 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.842 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.842 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:34.101 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:34.101 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:34.101 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:34.101 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.101 18:19:53 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.101 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:34.101 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.101 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.101 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.101 18:19:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:34.360 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:34.360 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:34.360 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:34.360 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.360 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.360 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:34.360 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.360 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.360 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.360 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:34.360 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:34.618 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:34.618 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:34.618 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.618 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.618 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:34.618 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.618 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.618 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.618 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:34.618 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:34.619 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:34.619 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:34.619 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.619 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.619 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:34.619 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.619 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.619 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.619 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.619 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.878 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.878 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.878 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.878 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.878 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.878 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.878 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:34.878 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.879 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.879 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.879 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.879 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.879 18:19:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:34.879 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.879 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:34.879 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:35.140 malloc_lvol_verify 00:06:35.140 18:19:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:35.401 e335f7bb-d7b9-476a-a258-ef916e12b549 00:06:35.401 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:35.662 4a8b39dc-727b-4bd9-9674-023f5fcc9fb7 00:06:35.662 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:35.923 /dev/nbd0 00:06:35.923 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:35.923 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:35.923 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:35.923 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:35.923 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:35.923 mke2fs 1.47.0 (5-Feb-2023) 00:06:35.923 Discarding device blocks: 0/4096 done 00:06:35.923 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:35.923 00:06:35.923 Allocating group tables: 0/1 done 00:06:35.923 Writing inode tables: 0/1 done 00:06:35.923 Creating journal (1024 blocks): done 00:06:35.923 Writing superblocks and filesystem accounting information: 0/1 done 00:06:35.923 00:06:35.923 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:35.923 18:19:55 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.923 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:35.923 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.923 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:35.923 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.923 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72001 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72001 ']' 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72001 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72001 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.185 killing process with pid 72001 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72001' 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72001 00:06:36.185 18:19:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72001 00:06:36.448 18:19:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:36.448 00:06:36.448 real 0m10.513s 00:06:36.448 user 0m14.586s 00:06:36.448 sys 0m3.589s 00:06:36.448 18:19:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.448 18:19:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:36.448 ************************************ 00:06:36.448 END TEST bdev_nbd 00:06:36.448 ************************************ 00:06:36.448 18:19:56 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:36.448 18:19:56 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:36.448 skipping fio tests on NVMe due to multi-ns failures. 00:06:36.448 18:19:56 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
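Before the verify stages launch below, the bdev_nbd stage that just ended is worth distilling: for every device it attaches a bdev over NBD through rpc.py, polls /proc/partitions until the node appears, proves the node is actually serving I/O with a single O_DIRECT read, and on teardown polls until the node disappears again. A minimal standalone sketch of that cycle follows; the helper names and the /tmp scratch file are illustrative stand-ins rather than the exact SPDK helpers, the rpc.py path, socket, and retry bound of 20 are as shown in the trace, and the sleep back-off is assumed, since the trace only shows the loop bound.

    #!/usr/bin/env bash
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock

    # Readiness: wait for the node to register in the partition table, then
    # read one 4 KiB block with O_DIRECT and confirm it produced data
    # (the same dd/stat check visible in the trace above).
    wait_for_nbd() {
        local name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions && break
            sleep 0.1    # assumed back-off between polls
        done
        dd if="/dev/$name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

    # Teardown: after nbd_stop_disk, wait for the node to leave the table.
    wait_for_nbd_exit() {
        local name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || return 0
            sleep 0.1    # assumed back-off between polls
        done
        return 1
    }

    "$RPC" -s "$SOCK" nbd_start_disk Nvme0n1 /dev/nbd0 && wait_for_nbd nbd0
    "$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd0 && wait_for_nbd_exit nbd0

Around that cycle, the stage uses nbd_get_disks piped through jq -r '.[] | .nbd_device' to turn the JSON device list into plain device names and grep -c /dev/nbd to count them; the data pass writes the same random 1 MiB file to all six devices with dd and reads it back with cmp -b -n 1M; and the final lvol pass chains bdev_malloc_create, bdev_lvol_create_lvstore, and bdev_lvol_create over the same socket, exports the lvol as /dev/nbd0, and runs mkfs.ext4 on it as an end-to-end check.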
00:06:36.448 18:19:56 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:36.448 18:19:56 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:36.448 18:19:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:36.448 18:19:56 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.448 18:19:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:36.448 ************************************ 00:06:36.448 START TEST bdev_verify 00:06:36.448 ************************************ 00:06:36.448 18:19:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:36.448 [2024-11-29 18:19:56.280259] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:36.448 [2024-11-29 18:19:56.280416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72379 ] 00:06:36.709 [2024-11-29 18:19:56.439078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.710 [2024-11-29 18:19:56.471196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.710 [2024-11-29 18:19:56.471256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.281 Running I/O for 5 seconds... 00:06:39.592 17216.00 IOPS, 67.25 MiB/s [2024-11-29T18:20:00.430Z] 18816.00 IOPS, 73.50 MiB/s [2024-11-29T18:20:01.361Z] 19413.33 IOPS, 75.83 MiB/s [2024-11-29T18:20:02.292Z] 19776.00 IOPS, 77.25 MiB/s [2024-11-29T18:20:02.292Z] 19353.60 IOPS, 75.60 MiB/s 00:06:42.387 Latency(us) 00:06:42.387 [2024-11-29T18:20:02.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:42.387 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:42.387 Verification LBA range: start 0x0 length 0xbd0bd 00:06:42.387 Nvme0n1 : 5.08 1601.03 6.25 0.00 0.00 79501.53 9427.10 72593.72 00:06:42.387 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:42.387 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:42.387 Nvme0n1 : 5.08 1575.59 6.15 0.00 0.00 80781.95 11191.53 89128.96 00:06:42.387 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:42.387 Verification LBA range: start 0x0 length 0xa0000 00:06:42.387 Nvme1n1 : 5.09 1608.46 6.28 0.00 0.00 79331.78 11645.24 68560.74 00:06:42.387 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:42.387 Verification LBA range: start 0xa0000 length 0xa0000 00:06:42.387 Nvme1n1 : 5.09 1583.64 6.19 0.00 0.00 80321.39 11594.83 74610.22 00:06:42.387 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:42.387 Verification LBA range: start 0x0 length 0x80000 00:06:42.387 Nvme2n1 : 5.09 1608.03 6.28 0.00 0.00 79266.26 11897.30 68157.44 00:06:42.387 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:42.387 Verification LBA range: start 0x80000 length 0x80000 00:06:42.387 Nvme2n1 : 5.10 1582.71 6.18 0.00 0.00 80180.26 13510.50 72997.02 00:06:42.387 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:42.387 Verification LBA range: start 0x0 length 0x80000 00:06:42.387 Nvme2n2 : 5.10 1606.92 6.28 0.00 0.00 79138.63 13611.32 68157.44 00:06:42.387 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:42.387 Verification LBA range: start 0x80000 length 0x80000 00:06:42.387 Nvme2n2 : 5.10 1582.13 6.18 0.00 0.00 80075.64 13409.67 70173.93 00:06:42.387 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:42.387 Verification LBA range: start 0x0 length 0x80000 00:06:42.387 Nvme2n3 : 5.10 1606.14 6.27 0.00 0.00 79037.24 14518.74 71383.83 00:06:42.387 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:42.387 Verification LBA range: start 0x80000 length 0x80000 00:06:42.387 Nvme2n3 : 5.10 1581.68 6.18 0.00 0.00 79966.39 13308.85 67754.14 00:06:42.387 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:42.387 Verification LBA range: start 0x0 length 0x20000 00:06:42.387 Nvme3n1 : 5.10 1605.37 6.27 0.00 0.00 78941.61 13611.32 73400.32 00:06:42.387 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:42.387 Verification LBA range: start 0x20000 length 0x20000 00:06:42.387 Nvme3n1 : 5.10 1580.91 6.18 0.00 0.00 79922.33 14317.10 69770.63 00:06:42.387 [2024-11-29T18:20:02.292Z] =================================================================================================================== 00:06:42.388 [2024-11-29T18:20:02.293Z] Total : 19122.61 74.70 0.00 0.00 79700.88 9427.10 89128.96 00:06:42.953 00:06:42.953 real 0m6.475s 00:06:42.953 user 0m12.161s 00:06:42.953 sys 0m0.257s 00:06:42.953 ************************************ 00:06:42.953 18:20:02 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.953 18:20:02 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:42.953 END TEST bdev_verify 00:06:42.953 ************************************ 00:06:42.953 18:20:02 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:42.953 18:20:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:42.953 18:20:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.953 18:20:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.953 ************************************ 00:06:42.953 START TEST bdev_verify_big_io 00:06:42.953 ************************************ 00:06:42.953 18:20:02 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:42.953 [2024-11-29 18:20:02.817056] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
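The two verify stages drive the same bdevperf binary against the same JSON bdev config and differ only in I/O size, which is why the big-I/O pass starting here moves more data on far fewer operations (90.74 MiB/s at 1451.82 IOPS total, versus 74.70 MiB/s at 19122.61 IOPS for the 4 KiB run above). Stripped of the harness, the two invocations reduce to the following; every path and flag is verbatim from the trace, only the shell variables are added for readability, and the trailing '' is the empty final argument that appears in the trace.

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # bdev_verify: 4 KiB verify workload, queue depth 128, 5 s, cores 0-1 (-m 0x3)
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''

    # bdev_verify_big_io: the identical run at a 64 KiB I/O size
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''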
00:06:42.953 [2024-11-29 18:20:02.817172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72474 ] 00:06:43.211 [2024-11-29 18:20:02.973796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.211 [2024-11-29 18:20:02.994403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.211 [2024-11-29 18:20:02.994947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.776 Running I/O for 5 seconds... 00:06:48.951 747.00 IOPS, 46.69 MiB/s [2024-11-29T18:20:09.789Z] 2095.00 IOPS, 130.94 MiB/s [2024-11-29T18:20:09.789Z] 2492.33 IOPS, 155.77 MiB/s 00:06:49.884 Latency(us) 00:06:49.884 [2024-11-29T18:20:09.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:49.884 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:49.884 Verification LBA range: start 0x0 length 0xbd0b 00:06:49.884 Nvme0n1 : 5.75 106.47 6.65 0.00 0.00 1134354.44 21677.29 1167952.34 00:06:49.884 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:49.884 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:49.884 Nvme0n1 : 5.73 111.61 6.98 0.00 0.00 1109293.21 27827.59 1180857.90 00:06:49.884 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:49.884 Verification LBA range: start 0x0 length 0xa000 00:06:49.884 Nvme1n1 : 5.75 111.30 6.96 0.00 0.00 1069128.47 105664.20 993727.41 00:06:49.884 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:49.884 Verification LBA range: start 0xa000 length 0xa000 00:06:49.884 Nvme1n1 : 5.74 111.53 6.97 0.00 0.00 1066985.94 104051.00 967916.31 00:06:49.884 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:49.884 Verification LBA range: start 0x0 length 0x8000 00:06:49.884 Nvme2n1 : 5.88 112.95 7.06 0.00 0.00 1011747.68 124215.93 1025991.29 00:06:49.884 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:49.884 Verification LBA range: start 0x8000 length 0x8000 00:06:49.884 Nvme2n1 : 5.88 112.70 7.04 0.00 0.00 1013162.38 141154.46 967916.31 00:06:49.884 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:49.884 Verification LBA range: start 0x0 length 0x8000 00:06:49.884 Nvme2n2 : 5.92 118.86 7.43 0.00 0.00 938453.32 41338.09 1051802.39 00:06:49.884 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:49.884 Verification LBA range: start 0x8000 length 0x8000 00:06:49.884 Nvme2n2 : 5.99 124.62 7.79 0.00 0.00 900264.88 17341.83 993727.41 00:06:49.884 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:49.884 Verification LBA range: start 0x0 length 0x8000 00:06:49.884 Nvme2n3 : 5.99 128.15 8.01 0.00 0.00 844720.57 43757.88 1077613.49 00:06:49.884 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:49.884 Verification LBA range: start 0x8000 length 0x8000 00:06:49.884 Nvme2n3 : 5.99 124.06 7.75 0.00 0.00 870109.17 17241.01 1025991.29 00:06:49.884 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:49.884 Verification LBA range: start 0x0 length 0x2000 00:06:49.884 Nvme3n1 : 6.05 144.97 9.06 0.00 0.00 723438.28 2520.62 1103424.59 00:06:49.884 Job: Nvme3n1 (Core Mask 0x2, workload: verify, 
depth: 128, IO size: 65536) 00:06:49.884 Verification LBA range: start 0x2000 length 0x2000 00:06:49.884 Nvme3n1 : 6.04 144.60 9.04 0.00 0.00 727992.38 957.83 1051802.39 00:06:49.884 [2024-11-29T18:20:09.789Z] =================================================================================================================== 00:06:49.884 [2024-11-29T18:20:09.789Z] Total : 1451.82 90.74 0.00 0.00 934901.28 957.83 1180857.90 00:06:50.493 00:06:50.493 real 0m7.589s 00:06:50.493 user 0m14.452s 00:06:50.493 sys 0m0.218s 00:06:50.493 18:20:10 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.493 ************************************ 00:06:50.493 END TEST bdev_verify_big_io 00:06:50.493 ************************************ 00:06:50.493 18:20:10 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:50.493 18:20:10 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:50.493 18:20:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:50.493 18:20:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.493 18:20:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:50.751 ************************************ 00:06:50.751 START TEST bdev_write_zeroes 00:06:50.751 ************************************ 00:06:50.751 18:20:10 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:50.751 [2024-11-29 18:20:10.463725] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:50.751 [2024-11-29 18:20:10.463861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72577 ] 00:06:50.751 [2024-11-29 18:20:10.620967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.751 [2024-11-29 18:20:10.642591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.316 Running I/O for 1 seconds... 
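The START TEST / END TEST banners and the real/user/sys timings sprinkled through this log come from autotest's run_test wrapper. The following is a simplified sketch of what such a wrapper does, inferred only from the banners and the bash `time` output visible here; the real helper in common/autotest_common.sh also manages the xtrace state being toggled above.

    run_test_sketch() {   # usage: run_test_sketch <name> <command> [args...]
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # produces the real/user/sys lines seen in this log
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }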
00:06:52.248 61056.00 IOPS, 238.50 MiB/s 00:06:52.248 Latency(us) 00:06:52.248 [2024-11-29T18:20:12.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:52.248 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:52.248 Nvme0n1 : 1.02 10147.93 39.64 0.00 0.00 12588.50 9023.80 20366.57 00:06:52.248 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:52.248 Nvme1n1 : 1.02 10136.50 39.60 0.00 0.00 12587.42 9376.69 20064.10 00:06:52.248 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:52.248 Nvme2n1 : 1.02 10124.96 39.55 0.00 0.00 12570.10 8922.98 19055.85 00:06:52.248 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:52.248 Nvme2n2 : 1.03 10113.62 39.51 0.00 0.00 12551.70 7511.43 18854.20 00:06:52.248 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:52.248 Nvme2n3 : 1.03 10102.25 39.46 0.00 0.00 12531.09 6251.13 18955.03 00:06:52.248 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:52.248 Nvme3n1 : 1.03 10090.92 39.42 0.00 0.00 12524.87 5721.80 20568.22 00:06:52.248 [2024-11-29T18:20:12.153Z] =================================================================================================================== 00:06:52.248 [2024-11-29T18:20:12.153Z] Total : 60716.19 237.17 0.00 0.00 12558.95 5721.80 20568.22 00:06:52.509 00:06:52.509 real 0m1.837s 00:06:52.509 user 0m1.559s 00:06:52.509 sys 0m0.166s 00:06:52.509 18:20:12 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.509 ************************************ 00:06:52.509 END TEST bdev_write_zeroes 00:06:52.509 ************************************ 00:06:52.509 18:20:12 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:52.509 18:20:12 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:52.509 18:20:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:52.509 18:20:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.509 18:20:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:52.509 ************************************ 00:06:52.509 START TEST bdev_json_nonenclosed 00:06:52.509 ************************************ 00:06:52.509 18:20:12 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:52.509 [2024-11-29 18:20:12.358392] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
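One quick way to read the write_zeroes table above: the MiB/s column is just IOPS times the 4096-byte I/O size. A one-line sanity check (bc assumed available):

    echo '61056.00 * 4096 / 1048576' | bc -l   # -> 238.50, the reported total MiB/s
    echo '10147.93 * 4096 / 1048576' | bc -l   # -> 39.64, the Nvme0n1 row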
00:06:52.509 [2024-11-29 18:20:12.358531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72618 ] 00:06:52.768 [2024-11-29 18:20:12.516979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.768 [2024-11-29 18:20:12.536542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.768 [2024-11-29 18:20:12.536610] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:52.768 [2024-11-29 18:20:12.536625] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:52.768 [2024-11-29 18:20:12.536636] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.768 00:06:52.768 real 0m0.305s 00:06:52.768 user 0m0.117s 00:06:52.768 sys 0m0.084s 00:06:52.768 18:20:12 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.768 18:20:12 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:52.768 ************************************ 00:06:52.768 END TEST bdev_json_nonenclosed 00:06:52.768 ************************************ 00:06:52.768 18:20:12 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:52.768 18:20:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:52.768 18:20:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.768 18:20:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:52.768 ************************************ 00:06:52.768 START TEST bdev_json_nonarray 00:06:52.768 ************************************ 00:06:52.768 18:20:12 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:53.027 [2024-11-29 18:20:12.716294] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:53.027 [2024-11-29 18:20:12.716408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72641 ] 00:06:53.027 [2024-11-29 18:20:12.875341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.027 [2024-11-29 18:20:12.894763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.027 [2024-11-29 18:20:12.894846] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:06:53.027 [2024-11-29 18:20:12.894861] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:53.027 [2024-11-29 18:20:12.894872] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.285 00:06:53.285 real 0m0.300s 00:06:53.285 user 0m0.115s 00:06:53.285 sys 0m0.081s 00:06:53.285 18:20:12 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.285 ************************************ 00:06:53.285 END TEST bdev_json_nonarray 00:06:53.285 18:20:12 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:53.285 ************************************ 00:06:53.285 18:20:13 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:53.285 18:20:13 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:53.285 18:20:13 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:53.285 18:20:13 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:53.285 18:20:13 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:53.285 18:20:13 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:53.285 18:20:13 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:53.285 18:20:13 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:53.285 18:20:13 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:53.285 18:20:13 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:53.285 18:20:13 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:53.285 ************************************ 00:06:53.285 END TEST blockdev_nvme 00:06:53.285 ************************************ 00:06:53.285 00:06:53.285 real 0m32.074s 00:06:53.285 user 0m49.704s 00:06:53.285 sys 0m5.761s 00:06:53.285 18:20:13 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.285 18:20:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:53.285 18:20:13 -- spdk/autotest.sh@209 -- # uname -s 00:06:53.285 18:20:13 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:53.285 18:20:13 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:53.285 18:20:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:53.285 18:20:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.285 18:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:53.285 ************************************ 00:06:53.285 START TEST blockdev_nvme_gpt 00:06:53.285 ************************************ 00:06:53.285 18:20:13 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:53.285 * Looking for test storage... 
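The two short tests that closed out blockdev_nvme above, bdev_json_nonenclosed and bdev_json_nonarray, are negative tests: each feeds bdevperf a deliberately malformed config and asserts that the app exits non-zero. For reference, a well-formed SPDK app config is a single JSON object whose "subsystems" member is an array; a minimal sketch follows (the /tmp file name is hypothetical).

    cat > /tmp/enclosed.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }
    EOF
    # nonenclosed.json evidently drops the outer {} and nonarray.json makes
    # "subsystems" a non-array (see the two json_config errors above), so
    # json_config_prepare_ctx rejects both and spdk_app_stop reports non-zero.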
00:06:53.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:53.285 18:20:13 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:53.285 18:20:13 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:53.285 18:20:13 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:53.544 18:20:13 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.544 18:20:13 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:53.544 18:20:13 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.544 18:20:13 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:53.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.544 --rc genhtml_branch_coverage=1 00:06:53.544 --rc genhtml_function_coverage=1 00:06:53.544 --rc genhtml_legend=1 00:06:53.544 --rc geninfo_all_blocks=1 00:06:53.544 --rc geninfo_unexecuted_blocks=1 00:06:53.544 00:06:53.544 ' 00:06:53.544 18:20:13 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:53.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.544 --rc 
genhtml_branch_coverage=1 00:06:53.544 --rc genhtml_function_coverage=1 00:06:53.544 --rc genhtml_legend=1 00:06:53.544 --rc geninfo_all_blocks=1 00:06:53.544 --rc geninfo_unexecuted_blocks=1 00:06:53.544 00:06:53.544 ' 00:06:53.544 18:20:13 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:53.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.544 --rc genhtml_branch_coverage=1 00:06:53.544 --rc genhtml_function_coverage=1 00:06:53.544 --rc genhtml_legend=1 00:06:53.544 --rc geninfo_all_blocks=1 00:06:53.544 --rc geninfo_unexecuted_blocks=1 00:06:53.544 00:06:53.544 ' 00:06:53.545 18:20:13 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:53.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.545 --rc genhtml_branch_coverage=1 00:06:53.545 --rc genhtml_function_coverage=1 00:06:53.545 --rc genhtml_legend=1 00:06:53.545 --rc geninfo_all_blocks=1 00:06:53.545 --rc geninfo_unexecuted_blocks=1 00:06:53.545 00:06:53.545 ' 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72714 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72714 
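The scripts/common.sh trace earlier on this page (lt 1.15 2 -> cmp_versions) is deciding whether the installed lcov predates 2.x before exporting the 1.x branch-coverage options above. A condensed sketch of that comparison, simplified from the trace (the real helper also handles other operators and validates numeric fields):

    lt_sketch() {   # usage: lt_sketch 1.15 2 -> returns 0 when $1 < $2
        local -a ver1 ver2
        local v max
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # versions are equal, so "less than" is false
    }
    lt_sketch 1.15 2 && echo "lcov predates 2.x; keep the 1.x branch-coverage options"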
00:06:53.545 18:20:13 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 72714 ']' 00:06:53.545 18:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:53.545 18:20:13 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.545 18:20:13 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.545 18:20:13 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.545 18:20:13 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.545 18:20:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.545 [2024-11-29 18:20:13.293170] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:53.545 [2024-11-29 18:20:13.293289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72714 ] 00:06:53.545 [2024-11-29 18:20:13.443148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.805 [2024-11-29 18:20:13.462745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.370 18:20:14 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.370 18:20:14 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:54.370 18:20:14 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:54.370 18:20:14 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:54.370 18:20:14 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:54.628 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:54.886 Waiting for block devices as requested 00:06:54.886 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:54.886 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:54.886 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:55.145 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:00.420 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:00.420 18:20:19 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:00.420 18:20:19 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:00.420 BYT; 00:07:00.420 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:00.420 BYT; 00:07:00.420 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:00.420 18:20:19 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:00.420 18:20:19 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:00.420 18:20:19 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:00.420 18:20:19 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:00.420 18:20:19 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:00.420 18:20:19 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:00.420 18:20:19 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:00.420 18:20:19 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:00.420 18:20:19 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:00.420 18:20:19 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:00.420 18:20:19 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:00.420 18:20:19 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:00.420 18:20:19 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:00.420 18:20:19 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:00.420 18:20:19 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:00.420 18:20:19 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:00.420 18:20:19 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:00.420 18:20:19 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:01.351 The operation has completed successfully. 00:07:01.351 18:20:21 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:02.286 The operation has completed successfully. 00:07:02.286 18:20:22 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:02.544 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:03.111 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:03.111 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:03.111 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:03.111 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:03.446 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:03.446 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.446 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.446 [] 00:07:03.446 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.446 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:03.446 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:03.446 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:03.446 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:03.446 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:03.446 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.446 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.705 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.705 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:03.705 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.705 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.705 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.705 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:03.705 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:03.705 18:20:23 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.705 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.705 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.705 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:03.705 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.705 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.705 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.705 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:03.705 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.705 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.705 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.705 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:03.705 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:03.705 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:03.705 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.705 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.705 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.705 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:03.705 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:03.706 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "de5078d0-e071-41a2-b82c-0c1ca61c7cd9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "de5078d0-e071-41a2-b82c-0c1ca61c7cd9",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "560b4636-5a99-44b3-b982-31bea39712f7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "560b4636-5a99-44b3-b982-31bea39712f7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "1fa49288-1902-46f2-9395-3444bea0303e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1fa49288-1902-46f2-9395-3444bea0303e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "5ee0d56f-63cb-4cc8-b1cc-ed6ed3eb7dd7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5ee0d56f-63cb-4cc8-b1cc-ed6ed3eb7dd7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "2c0f8f02-4497-4e87-8815-3d3528e5e6ad"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2c0f8f02-4497-4e87-8815-3d3528e5e6ad",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:03.706 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:03.706 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:03.706 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:03.706 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 72714 00:07:03.706 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 72714 ']' 00:07:03.706 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 72714 00:07:03.706 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:03.706 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.706 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72714 00:07:03.706 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.706 killing process with pid 72714 00:07:03.706 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.706 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72714' 00:07:03.706 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 72714 00:07:03.706 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 72714 00:07:03.964 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:03.964 18:20:23 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:03.964 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:03.964 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.964 18:20:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.964 ************************************ 00:07:03.964 START TEST bdev_hello_world 00:07:03.964 ************************************ 00:07:03.964 18:20:23 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:03.964 
[2024-11-29 18:20:23.841801] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:07:03.964 [2024-11-29 18:20:23.841918] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73331 ] 00:07:04.222 [2024-11-29 18:20:24.000140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.222 [2024-11-29 18:20:24.019021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.789 [2024-11-29 18:20:24.388698] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:04.789 [2024-11-29 18:20:24.388751] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:04.789 [2024-11-29 18:20:24.388769] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:04.789 [2024-11-29 18:20:24.390822] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:04.789 [2024-11-29 18:20:24.392054] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:04.789 [2024-11-29 18:20:24.392085] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:04.789 [2024-11-29 18:20:24.392477] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:04.789 00:07:04.789 [2024-11-29 18:20:24.392494] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:04.789 00:07:04.789 real 0m0.759s 00:07:04.789 user 0m0.508s 00:07:04.789 sys 0m0.149s 00:07:04.789 18:20:24 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.789 ************************************ 00:07:04.789 END TEST bdev_hello_world 00:07:04.789 ************************************ 00:07:04.789 18:20:24 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:04.789 18:20:24 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:04.789 18:20:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:04.789 18:20:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.789 18:20:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:04.789 ************************************ 00:07:04.789 START TEST bdev_bounds 00:07:04.789 ************************************ 00:07:04.789 18:20:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:04.789 Process bdevio pid: 73357 00:07:04.789 18:20:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73357 00:07:04.789 18:20:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.789 18:20:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73357' 00:07:04.789 18:20:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73357 00:07:04.789 18:20:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:04.789 18:20:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73357 ']' 00:07:04.789 18:20:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.789 18:20:24 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.789 18:20:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.789 18:20:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.789 18:20:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:04.789 [2024-11-29 18:20:24.667266] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:07:04.789 [2024-11-29 18:20:24.667376] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73357 ] 00:07:05.049 [2024-11-29 18:20:24.825366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.049 [2024-11-29 18:20:24.847527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.049 [2024-11-29 18:20:24.847610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.049 [2024-11-29 18:20:24.847936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.619 18:20:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.619 18:20:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:05.619 18:20:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:05.879 I/O targets: 00:07:05.879 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:05.879 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:05.879 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:05.879 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:05.879 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:05.880 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:05.880 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:05.880 00:07:05.880 00:07:05.880 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.880 http://cunit.sourceforge.net/ 00:07:05.880 00:07:05.880 00:07:05.880 Suite: bdevio tests on: Nvme3n1 00:07:05.880 Test: blockdev write read block ...passed 00:07:05.880 Test: blockdev write zeroes read block ...passed 00:07:05.880 Test: blockdev write zeroes read no split ...passed 00:07:05.880 Test: blockdev write zeroes read split ...passed 00:07:05.880 Test: blockdev write zeroes read split partial ...passed 00:07:05.880 Test: blockdev reset ...[2024-11-29 18:20:25.614687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:05.880 passed 00:07:05.880 Test: blockdev write read 8 blocks ...[2024-11-29 18:20:25.616258] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:05.880 passed 00:07:05.880 Test: blockdev write read size > 128k ...passed 00:07:05.880 Test: blockdev write read invalid size ...passed 00:07:05.880 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:05.880 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:05.880 Test: blockdev write read max offset ...passed 00:07:05.880 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:05.880 Test: blockdev writev readv 8 blocks ...passed 00:07:05.880 Test: blockdev writev readv 30 x 1block ...passed 00:07:05.880 Test: blockdev writev readv block ...passed 00:07:05.880 Test: blockdev writev readv size > 128k ...passed 00:07:05.880 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:05.880 Test: blockdev comparev and writev ...[2024-11-29 18:20:25.620608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d300e000 len:0x1000 00:07:05.880 [2024-11-29 18:20:25.620653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:05.880 passed 00:07:05.880 Test: blockdev nvme passthru rw ...passed 00:07:05.880 Test: blockdev nvme passthru vendor specific ...[2024-11-29 18:20:25.621102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:05.880 passed 00:07:05.880 Test: blockdev nvme admin passthru ...[2024-11-29 18:20:25.621133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:05.880 passed 00:07:05.880 Test: blockdev copy ...passed 00:07:05.880 Suite: bdevio tests on: Nvme2n3 00:07:05.880 Test: blockdev write read block ...passed 00:07:05.880 Test: blockdev write zeroes read block ...passed 00:07:05.880 Test: blockdev write zeroes read no split ...passed 00:07:05.880 Test: blockdev write zeroes read split ...passed 00:07:05.880 Test: blockdev write zeroes read split partial ...passed 00:07:05.880 Test: blockdev reset ...[2024-11-29 18:20:25.635853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:05.880 passed 00:07:05.880 Test: blockdev write read 8 blocks ...[2024-11-29 18:20:25.637633] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:05.880 passed 00:07:05.880 Test: blockdev write read size > 128k ...passed 00:07:05.880 Test: blockdev write read invalid size ...passed 00:07:05.880 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:05.880 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:05.880 Test: blockdev write read max offset ...passed 00:07:05.880 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:05.880 Test: blockdev writev readv 8 blocks ...passed 00:07:05.880 Test: blockdev writev readv 30 x 1block ...passed 00:07:05.880 Test: blockdev writev readv block ...passed 00:07:05.880 Test: blockdev writev readv size > 128k ...passed 00:07:05.880 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:05.880 Test: blockdev comparev and writev ...[2024-11-29 18:20:25.641044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3006000 len:0x1000 00:07:05.880 [2024-11-29 18:20:25.641081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:05.880 passed 00:07:05.880 Test: blockdev nvme passthru rw ...passed 00:07:05.880 Test: blockdev nvme passthru vendor specific ...passed 00:07:05.880 Test: blockdev nvme admin passthru ...[2024-11-29 18:20:25.641593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:05.880 [2024-11-29 18:20:25.641616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:05.880 passed 00:07:05.880 Test: blockdev copy ...passed 00:07:05.880 Suite: bdevio tests on: Nvme2n2 00:07:05.880 Test: blockdev write read block ...passed 00:07:05.880 Test: blockdev write zeroes read block ...passed 00:07:05.880 Test: blockdev write zeroes read no split ...passed 00:07:05.880 Test: blockdev write zeroes read split ...passed 00:07:05.880 Test: blockdev write zeroes read split partial ...passed 00:07:05.880 Test: blockdev reset ...[2024-11-29 18:20:25.657470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:05.880 [2024-11-29 18:20:25.659222] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:05.880 passed
00:07:05.880 Test: blockdev write read 8 blocks ...passed
00:07:05.880 Test: blockdev write read size > 128k ...passed
00:07:05.880 Test: blockdev write read invalid size ...passed
00:07:05.880 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:05.880 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:05.880 Test: blockdev write read max offset ...passed
00:07:05.880 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:05.880 Test: blockdev writev readv 8 blocks ...passed
00:07:05.880 Test: blockdev writev readv 30 x 1block ...passed
00:07:05.880 Test: blockdev writev readv block ...passed
00:07:05.880 Test: blockdev writev readv size > 128k ...passed
00:07:05.880 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:05.880 Test: blockdev comparev and writev ...passed
00:07:05.880 Test: blockdev nvme passthru rw ...[2024-11-29 18:20:25.663001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3008000 len:0x1000
00:07:05.880 [2024-11-29 18:20:25.663033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:05.880 passed
00:07:05.880 Test: blockdev nvme passthru vendor specific ...[2024-11-29 18:20:25.663473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:05.880 passed
00:07:05.880 Test: blockdev nvme admin passthru ...[2024-11-29 18:20:25.663494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:05.880 passed
00:07:05.880 Test: blockdev copy ...passed
00:07:05.880 Suite: bdevio tests on: Nvme2n1
00:07:05.880 Test: blockdev write read block ...passed
00:07:05.880 Test: blockdev write zeroes read block ...passed
00:07:05.880 Test: blockdev write zeroes read no split ...passed
00:07:05.880 Test: blockdev write zeroes read split ...passed
00:07:05.880 Test: blockdev write zeroes read split partial ...passed
00:07:05.880 Test: blockdev reset ...[2024-11-29 18:20:25.679068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:07:05.880 passed
00:07:05.880 Test: blockdev write read 8 blocks ...[2024-11-29 18:20:25.680816] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:05.880 passed
00:07:05.880 Test: blockdev write read size > 128k ...passed
00:07:05.881 Test: blockdev write read invalid size ...passed
00:07:05.881 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:05.881 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:05.881 Test: blockdev write read max offset ...passed
00:07:05.881 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:05.881 Test: blockdev writev readv 8 blocks ...passed
00:07:05.881 Test: blockdev writev readv 30 x 1block ...passed
00:07:05.881 Test: blockdev writev readv block ...passed
00:07:05.881 Test: blockdev writev readv size > 128k ...passed
00:07:05.881 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:05.881 Test: blockdev comparev and writev ...[2024-11-29 18:20:25.684419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ece3d000 len:0x1000
00:07:05.881 [2024-11-29 18:20:25.684450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:05.881 passed
00:07:05.881 Test: blockdev nvme passthru rw ...passed
00:07:05.881 Test: blockdev nvme passthru vendor specific ...passed
00:07:05.881 Test: blockdev nvme admin passthru ...[2024-11-29 18:20:25.684912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:05.881 [2024-11-29 18:20:25.684938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:05.881 passed
00:07:05.881 Test: blockdev copy ...passed
00:07:05.881 Suite: bdevio tests on: Nvme1n1p2
00:07:05.881 Test: blockdev write read block ...passed
00:07:05.881 Test: blockdev write zeroes read block ...passed
00:07:05.881 Test: blockdev write zeroes read no split ...passed
00:07:05.881 Test: blockdev write zeroes read split ...passed
00:07:05.881 Test: blockdev write zeroes read split partial ...passed
00:07:05.881 Test: blockdev reset ...[2024-11-29 18:20:25.700227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:07:05.881 passed
00:07:05.881 Test: blockdev write read 8 blocks ...[2024-11-29 18:20:25.701759] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:07:05.881 passed
00:07:05.881 Test: blockdev write read size > 128k ...passed
00:07:05.881 Test: blockdev write read invalid size ...passed
00:07:05.881 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:05.881 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:05.881 Test: blockdev write read max offset ...passed
00:07:05.881 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:05.881 Test: blockdev writev readv 8 blocks ...passed
00:07:05.881 Test: blockdev writev readv 30 x 1block ...passed
00:07:05.881 Test: blockdev writev readv block ...passed
00:07:05.881 Test: blockdev writev readv size > 128k ...passed
00:07:05.881 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:05.881 Test: blockdev comparev and writev ...[2024-11-29 18:20:25.705826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2ece39000 len:0x1000
00:07:05.881 [2024-11-29 18:20:25.705871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:05.881 passed
00:07:05.881 Test: blockdev nvme passthru rw ...passed
00:07:05.881 Test: blockdev nvme passthru vendor specific ...passed
00:07:05.881 Test: blockdev nvme admin passthru ...passed
00:07:05.881 Test: blockdev copy ...passed
00:07:05.881 Suite: bdevio tests on: Nvme1n1p1
00:07:05.881 Test: blockdev write read block ...passed
00:07:05.881 Test: blockdev write zeroes read block ...passed
00:07:05.881 Test: blockdev write zeroes read no split ...passed
00:07:05.881 Test: blockdev write zeroes read split ...passed
00:07:05.881 Test: blockdev write zeroes read split partial ...passed
00:07:05.881 Test: blockdev reset ...[2024-11-29 18:20:25.715948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:07:05.881 passed
00:07:05.881 Test: blockdev write read 8 blocks ...[2024-11-29 18:20:25.717245] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:07:05.881 passed
00:07:05.881 Test: blockdev write read size > 128k ...passed
00:07:05.881 Test: blockdev write read invalid size ...passed
00:07:05.881 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:05.881 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:05.881 Test: blockdev write read max offset ...passed
00:07:05.881 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:05.881 Test: blockdev writev readv 8 blocks ...passed
00:07:05.881 Test: blockdev writev readv 30 x 1block ...passed
00:07:05.881 Test: blockdev writev readv block ...passed
00:07:05.881 Test: blockdev writev readv size > 128k ...passed
00:07:05.881 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:05.881 Test: blockdev comparev and writev ...passed
00:07:05.881 Test: blockdev nvme passthru rw ...passed
00:07:05.881 Test: blockdev nvme passthru vendor specific ...passed
00:07:05.881 Test: blockdev nvme admin passthru ...passed
00:07:05.881 Test: blockdev copy ...[2024-11-29 18:20:25.720715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2ece35000 len:0x1000
00:07:05.881 [2024-11-29 18:20:25.720745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:05.881 passed
00:07:05.881 Suite: bdevio tests on: Nvme0n1
00:07:05.881 Test: blockdev write read block ...passed
00:07:05.881 Test: blockdev write zeroes read block ...passed
00:07:05.881 Test: blockdev write zeroes read no split ...passed
00:07:05.881 Test: blockdev write zeroes read split ...passed
00:07:05.881 Test: blockdev write zeroes read split partial ...passed
00:07:05.881 Test: blockdev reset ...[2024-11-29 18:20:25.730429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:07:05.881 passed
00:07:05.881 Test: blockdev write read 8 blocks ...[2024-11-29 18:20:25.731786] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:07:05.881 passed
00:07:05.881 Test: blockdev write read size > 128k ...passed
00:07:05.881 Test: blockdev write read invalid size ...passed
00:07:05.881 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:05.881 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:05.881 Test: blockdev write read max offset ...passed
00:07:05.881 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:05.881 Test: blockdev writev readv 8 blocks ...passed
00:07:05.881 Test: blockdev writev readv 30 x 1block ...passed
00:07:05.881 Test: blockdev writev readv block ...passed
00:07:05.881 Test: blockdev writev readv size > 128k ...passed
00:07:05.881 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:05.881 Test: blockdev comparev and writev ...[2024-11-29 18:20:25.734678] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:07:05.881 separate metadata which is not supported yet.
00:07:05.881 passed
00:07:05.881 Test: blockdev nvme passthru rw ...passed
00:07:05.881 Test: blockdev nvme passthru vendor specific ...passed
00:07:05.881 Test: blockdev nvme admin passthru ...[2024-11-29 18:20:25.735014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:07:05.881 [2024-11-29 18:20:25.735043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:07:05.881 passed
00:07:05.881 Test: blockdev copy ...passed
00:07:05.881
00:07:05.881 Run Summary: Type Total Ran Passed Failed Inactive
00:07:05.881 suites 7 7 n/a 0 0
00:07:05.881 tests 161 161 161 0 0
00:07:05.881 asserts 1025 1025 1025 0 n/a
00:07:05.881
00:07:05.881 Elapsed time = 0.332 seconds
00:07:05.881 0
00:07:05.882 18:20:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73357
00:07:05.882 18:20:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73357 ']'
00:07:05.882 18:20:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73357
00:07:05.882 18:20:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:07:05.882 18:20:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:05.882 18:20:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73357
00:07:05.882 18:20:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:05.882 18:20:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:05.882 18:20:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73357'
00:07:05.882 killing process with pid 73357
00:07:05.882 18:20:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73357
00:07:05.882 18:20:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73357
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:07:06.143
00:07:06.143 real 0m1.299s
00:07:06.143 user 0m3.341s
00:07:06.143 sys 0m0.252s
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:06.143 ************************************
00:07:06.143 END TEST bdev_bounds
00:07:06.143 ************************************
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:07:06.143 18:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:07:06.143 18:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:06.143 18:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:06.143 18:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:06.143 ************************************
00:07:06.143 START TEST bdev_nbd
00:07:06.143 ************************************
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73405
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73405 /var/tmp/spdk-nbd.sock
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73405 ']'
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:06.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:07:06.143 18:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:07:06.143 [2024-11-29 18:20:26.014294] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization...
00:07:06.143 [2024-11-29 18:20:26.014405] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:06.404 [2024-11-29 18:20:26.170244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:06.404 [2024-11-29 18:20:26.189616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:06.975 18:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:06.975 18:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:07:06.975 18:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:07:06.975 18:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:06.975 18:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:07:06.975 18:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:07:06.975 18:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:07:06.975 18:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:06.975 18:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:07:06.975 18:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:07:06.975 18:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:07:06.975 18:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:07:06.975 18:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:07:06.975 18:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:07:06.975 18:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:07.234 1+0 records in
00:07:07.234 1+0 records out
00:07:07.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335945 s, 12.2 MB/s
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:07:07.234 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1
00:07:07.493 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:07:07.493 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:07:07.493 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:07:07.493 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:07:07.493 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:07:07.493 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:07.493 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:07.493 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:07:07.493 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:07:07.493 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:07.493 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:07.493 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:07.493 1+0 records in
00:07:07.493 1+0 records out
00:07:07.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309202 s, 13.2 MB/s
00:07:07.493 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:07.493 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:07:07.493 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:07.493 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:07.493 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:07:07.494 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:07.494 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:07:07.494 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:07.752 1+0 records in
00:07:07.752 1+0 records out
00:07:07.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372135 s, 11.0 MB/s
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:07:07.752 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:08.010 1+0 records in
00:07:08.010 1+0 records out
00:07:08.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387749 s, 10.6 MB/s
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:07:08.010 18:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2
00:07:08.270 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:07:08.270 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:07:08.270 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:07:08.270 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:07:08.270 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:07:08.270 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:08.270 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:08.270 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:07:08.270 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:07:08.270 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:08.270 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:08.270 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:08.270 1+0 records in
00:07:08.270 1+0 records out
00:07:08.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045797 s, 8.9 MB/s
00:07:08.270 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:08.270 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:07:08.271 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:08.271 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:08.271 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:07:08.271 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:08.271 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:07:08.271 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:08.531 1+0 records in
00:07:08.531 1+0 records out
00:07:08.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469181 s, 8.7 MB/s
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:07:08.531 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:08.790 1+0 records in
00:07:08.790 1+0 records out
00:07:08.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371309 s, 11.0 MB/s
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:07:08.790 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:09.048 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:07:09.048 {
00:07:09.048 "nbd_device": "/dev/nbd0",
00:07:09.048 "bdev_name": "Nvme0n1"
00:07:09.048 },
00:07:09.048 {
00:07:09.048 "nbd_device": "/dev/nbd1",
00:07:09.048 "bdev_name": "Nvme1n1p1"
00:07:09.048 },
00:07:09.048 {
00:07:09.048 "nbd_device": "/dev/nbd2",
00:07:09.048 "bdev_name": "Nvme1n1p2"
00:07:09.048 },
00:07:09.048 {
00:07:09.048 "nbd_device": "/dev/nbd3",
00:07:09.048 "bdev_name": "Nvme2n1"
00:07:09.048 },
00:07:09.048 {
00:07:09.048 "nbd_device": "/dev/nbd4",
00:07:09.048 "bdev_name": "Nvme2n2"
00:07:09.048 },
00:07:09.048 {
00:07:09.048 "nbd_device": "/dev/nbd5",
00:07:09.048 "bdev_name": "Nvme2n3"
00:07:09.048 },
00:07:09.048 {
00:07:09.048 "nbd_device": "/dev/nbd6",
00:07:09.048 "bdev_name": "Nvme3n1"
00:07:09.048 }
00:07:09.048 ]'
00:07:09.048 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:07:09.048 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:07:09.048 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:07:09.048 {
00:07:09.048 "nbd_device": "/dev/nbd0",
00:07:09.048 "bdev_name": "Nvme0n1"
00:07:09.048 },
00:07:09.048 {
00:07:09.048 "nbd_device": "/dev/nbd1",
00:07:09.048 "bdev_name": "Nvme1n1p1"
00:07:09.048 },
00:07:09.048 {
00:07:09.048 "nbd_device": "/dev/nbd2",
00:07:09.048 "bdev_name": "Nvme1n1p2"
00:07:09.048 },
00:07:09.048 {
00:07:09.048 "nbd_device": "/dev/nbd3",
00:07:09.048 "bdev_name": "Nvme2n1"
00:07:09.048 },
00:07:09.048 {
00:07:09.048 "nbd_device": "/dev/nbd4",
00:07:09.048 "bdev_name": "Nvme2n2"
00:07:09.048 },
00:07:09.048 {
00:07:09.048 "nbd_device": "/dev/nbd5",
00:07:09.048 "bdev_name": "Nvme2n3"
00:07:09.048 },
00:07:09.048 {
00:07:09.048 "nbd_device": "/dev/nbd6",
00:07:09.048 "bdev_name": "Nvme3n1"
00:07:09.048 }
00:07:09.048 ]'
00:07:09.048 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6'
00:07:09.048 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:09.048 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6')
00:07:09.048 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:09.048 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:07:09.048 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:09.048 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:09.306 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:09.306 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:09.306 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:09.306 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:09.306 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:09.306 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:09.306 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:09.306 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:09.306 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:09.306 18:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:09.307 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:09.307 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:09.307 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:09.307 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:09.307 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:09.307 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:09.307 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:09.307 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:09.307 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:09.307 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:07:09.567 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:07:09.567 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:07:09.567 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:07:09.567 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:09.567 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:09.567 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:07:09.826 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:09.826 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:09.826 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:09.826 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:07:09.826 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:07:09.826 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:07:09.826 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:07:09.826 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:09.826 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:09.826 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:07:09.826 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:09.826 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:09.826 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:09.826 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:07:10.085 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:07:10.085 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:07:10.085 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:07:10.085 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:10.085 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:10.085 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:07:10.085 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:10.085 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:10.085 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:10.085 18:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:07:10.342 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:07:10.342 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:07:10.342 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:07:10.342 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:10.342 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:10.342 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:07:10.342 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:10.342 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:10.342 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:10.342 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:07:10.600 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:07:10.600 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:07:10.600 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:07:10.600 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:10.600 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:10.600 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:07:10.600 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:10.600 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:10.600 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:10.601 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:10.601 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:07:10.862 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:07:10.862 /dev/nbd0
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:11.121 1+0 records in
00:07:11.121 1+0 records out
00:07:11.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560708 s, 7.3 MB/s
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:07:11.121 18:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1
00:07:11.121 /dev/nbd1
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:11.121 1+0 records in
00:07:11.121 1+0 records out
00:07:11.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275433 s, 14.9 MB/s
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:07:11.121 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10
00:07:11.380 /dev/nbd10
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:11.380 1+0 records in
00:07:11.380 1+0 records out
00:07:11.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360542 s, 11.4 MB/s
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:07:11.380 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11
00:07:11.639 /dev/nbd11
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:11.639 1+0 records in
00:07:11.639 1+0 records out
00:07:11.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380301 s, 10.8 MB/s
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:07:11.639 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12
00:07:11.897 /dev/nbd12
00:07:11.897 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:07:11.897 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:07:11.898 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:07:11.898 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:07:11.898 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:11.898 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:11.898 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
/proc/partitions 00:07:11.898 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:11.898 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.898 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.898 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.898 1+0 records in 00:07:11.898 1+0 records out 00:07:11.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458983 s, 8.9 MB/s 00:07:11.898 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.898 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:11.898 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.898 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.898 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:11.898 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.898 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:11.898 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:12.156 /dev/nbd13 00:07:12.156 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:12.156 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:12.156 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:12.156 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.156 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.156 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.156 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:12.156 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.156 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.156 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.156 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.156 1+0 records in 00:07:12.156 1+0 records out 00:07:12.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381979 s, 10.7 MB/s 00:07:12.156 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.156 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.156 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.156 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.156 18:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.156 18:20:31 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.156 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:12.156 18:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:12.416 /dev/nbd14 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.416 1+0 records in 00:07:12.416 1+0 records out 00:07:12.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395897 s, 10.3 MB/s 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.416 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:12.677 { 00:07:12.677 "nbd_device": "/dev/nbd0", 00:07:12.677 "bdev_name": "Nvme0n1" 00:07:12.677 }, 00:07:12.677 { 00:07:12.677 "nbd_device": "/dev/nbd1", 00:07:12.677 "bdev_name": "Nvme1n1p1" 00:07:12.677 }, 00:07:12.677 { 00:07:12.677 "nbd_device": "/dev/nbd10", 00:07:12.677 "bdev_name": "Nvme1n1p2" 00:07:12.677 }, 00:07:12.677 { 00:07:12.677 "nbd_device": "/dev/nbd11", 00:07:12.677 "bdev_name": "Nvme2n1" 00:07:12.677 }, 00:07:12.677 { 00:07:12.677 "nbd_device": "/dev/nbd12", 00:07:12.677 "bdev_name": "Nvme2n2" 00:07:12.677 }, 00:07:12.677 { 00:07:12.677 "nbd_device": "/dev/nbd13", 
00:07:12.677 "bdev_name": "Nvme2n3" 00:07:12.677 }, 00:07:12.677 { 00:07:12.677 "nbd_device": "/dev/nbd14", 00:07:12.677 "bdev_name": "Nvme3n1" 00:07:12.677 } 00:07:12.677 ]' 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:12.677 { 00:07:12.677 "nbd_device": "/dev/nbd0", 00:07:12.677 "bdev_name": "Nvme0n1" 00:07:12.677 }, 00:07:12.677 { 00:07:12.677 "nbd_device": "/dev/nbd1", 00:07:12.677 "bdev_name": "Nvme1n1p1" 00:07:12.677 }, 00:07:12.677 { 00:07:12.677 "nbd_device": "/dev/nbd10", 00:07:12.677 "bdev_name": "Nvme1n1p2" 00:07:12.677 }, 00:07:12.677 { 00:07:12.677 "nbd_device": "/dev/nbd11", 00:07:12.677 "bdev_name": "Nvme2n1" 00:07:12.677 }, 00:07:12.677 { 00:07:12.677 "nbd_device": "/dev/nbd12", 00:07:12.677 "bdev_name": "Nvme2n2" 00:07:12.677 }, 00:07:12.677 { 00:07:12.677 "nbd_device": "/dev/nbd13", 00:07:12.677 "bdev_name": "Nvme2n3" 00:07:12.677 }, 00:07:12.677 { 00:07:12.677 "nbd_device": "/dev/nbd14", 00:07:12.677 "bdev_name": "Nvme3n1" 00:07:12.677 } 00:07:12.677 ]' 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:12.677 /dev/nbd1 00:07:12.677 /dev/nbd10 00:07:12.677 /dev/nbd11 00:07:12.677 /dev/nbd12 00:07:12.677 /dev/nbd13 00:07:12.677 /dev/nbd14' 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:12.677 /dev/nbd1 00:07:12.677 /dev/nbd10 00:07:12.677 /dev/nbd11 00:07:12.677 /dev/nbd12 00:07:12.677 /dev/nbd13 00:07:12.677 /dev/nbd14' 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:12.677 256+0 records in 00:07:12.677 256+0 records out 00:07:12.677 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0066382 s, 158 MB/s 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.677 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:12.939 256+0 records in 00:07:12.939 256+0 records out 00:07:12.939 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.154434 s, 6.8 MB/s 00:07:12.939 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.939 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:12.939 256+0 records in 00:07:12.939 256+0 records out 00:07:12.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18428 s, 5.7 MB/s 00:07:12.939 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.939 18:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:13.200 256+0 records in 00:07:13.200 256+0 records out 00:07:13.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.235686 s, 4.4 MB/s 00:07:13.200 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.200 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:13.485 256+0 records in 00:07:13.485 256+0 records out 00:07:13.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114916 s, 9.1 MB/s 00:07:13.485 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.485 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:13.485 256+0 records in 00:07:13.485 256+0 records out 00:07:13.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.212652 s, 4.9 MB/s 00:07:13.485 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.485 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:13.746 256+0 records in 00:07:13.746 256+0 records out 00:07:13.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.230649 s, 4.5 MB/s 00:07:13.747 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.747 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:14.008 256+0 records in 00:07:14.008 256+0 records out 00:07:14.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.251531 s, 4.2 MB/s 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.008 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:14.270 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:14.270 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:14.270 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.270 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:14.270 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.270 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:14.270 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.270 18:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.270 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.270 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.270 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.270 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.270 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.270 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.270 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.270 18:20:34 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.270 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.270 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:14.531 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.531 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.531 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:14.531 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.531 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.531 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.531 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.531 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.531 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.531 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:14.790 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:14.790 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:14.790 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:14.790 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.790 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.790 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:14.790 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.790 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.790 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.790 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:15.048 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:15.048 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:15.048 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:15.048 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.048 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.048 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:15.048 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.048 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.048 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.048 18:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:15.306 18:20:35 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:15.306 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:15.306 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:15.306 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.306 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.306 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:15.306 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.306 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.307 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.307 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.564 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.822 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:15.822 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:15.822 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.822 18:20:35 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:15.822 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.822 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:15.822 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:15.822 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:15.822 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:15.822 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:15.822 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:15.822 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:15.822 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:15.822 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.822 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:15.822 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:16.080 malloc_lvol_verify 00:07:16.080 18:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:16.338 ef615315-7f03-4916-beb0-2dcdd4f54548 00:07:16.338 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:16.595 33e459eb-204e-4261-be99-14367b34a439 00:07:16.596 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:16.854 /dev/nbd0 00:07:16.854 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:16.854 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:16.854 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:16.854 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:16.854 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:16.854 mke2fs 1.47.0 (5-Feb-2023) 00:07:16.854 Discarding device blocks: 0/4096 done 00:07:16.854 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:16.854 00:07:16.854 Allocating group tables: 0/1 done 00:07:16.854 Writing inode tables: 0/1 done 00:07:16.854 Creating journal (1024 blocks): done 00:07:16.854 Writing superblocks and filesystem accounting information: 0/1 done 00:07:16.854 00:07:16.854 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:16.854 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.854 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:16.854 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.854 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:16.854 18:20:36 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.854 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73405 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73405 ']' 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73405 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73405 00:07:17.112 killing process with pid 73405 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73405' 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73405 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73405 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:17.112 00:07:17.112 real 0m11.041s 00:07:17.112 user 0m15.384s 00:07:17.112 sys 0m3.938s 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.112 ************************************ 00:07:17.112 END TEST bdev_nbd 00:07:17.112 ************************************ 00:07:17.112 18:20:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:17.370 18:20:37 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:17.370 18:20:37 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:17.370 18:20:37 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:17.370 skipping fio tests on NVMe due to multi-ns failures. 00:07:17.370 18:20:37 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
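
The killprocess 73405 teardown traced above reduces to a small pattern: confirm the pid is still alive with kill -0, refuse to signal anything whose process name resolves to sudo, then kill and wait so the exit status is reaped. A minimal reconstruction from the xtrace (the real helper in common/autotest_common.sh carries extra bookkeeping; this sketch only mirrors the calls visible in the trace):

    killprocess() {
        local pid=$1
        # nothing to do without a pid
        [ -z "$pid" ] && return 1
        # bail out if the process is already gone
        kill -0 "$pid" 2>/dev/null || return 1
        if [ "$(uname)" = Linux ]; then
            # never signal a sudo wrapper by mistake
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        # reap the child so the harness records its exit status
        wait "$pid"
    }
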
00:07:17.370 18:20:37 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:17.370 18:20:37 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:17.370 18:20:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:17.370 18:20:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.370 18:20:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:17.370 ************************************ 00:07:17.370 START TEST bdev_verify 00:07:17.370 ************************************ 00:07:17.370 18:20:37 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:17.370 [2024-11-29 18:20:37.108434] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:07:17.370 [2024-11-29 18:20:37.108566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73818 ] 00:07:17.370 [2024-11-29 18:20:37.264128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.628 [2024-11-29 18:20:37.284664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.628 [2024-11-29 18:20:37.284820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.885 Running I/O for 5 seconds... 
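
The bdev_verify stage launched above is a plain bdevperf run over every bdev described in bdev.json: queue depth 128, 4 KiB I/Os, and the verify workload, which writes a pattern, reads it back, and compares, for five seconds on two reactor cores. The same run can be reproduced by hand (paths copied from the log; -C is passed through exactly as the harness does):

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # -q 128: outstanding I/Os per job; -o 4096: I/O size in bytes;
    # -w verify: write/read-back/compare; -t 5: seconds; -m 0x3: two cores
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The IOPS samples and the per-bdev latency table that follow are this run's output.
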
00:07:20.191 24960.00 IOPS, 97.50 MiB/s [2024-11-29T18:20:41.029Z] 25280.00 IOPS, 98.75 MiB/s [2024-11-29T18:20:41.960Z] 23722.67 IOPS, 92.67 MiB/s [2024-11-29T18:20:42.891Z] 22784.00 IOPS, 89.00 MiB/s [2024-11-29T18:20:42.891Z] 22579.20 IOPS, 88.20 MiB/s 00:07:22.986 Latency(us) 00:07:22.986 [2024-11-29T18:20:42.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.986 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.986 Verification LBA range: start 0x0 length 0xbd0bd 00:07:22.986 Nvme0n1 : 5.05 1520.13 5.94 0.00 0.00 83975.72 16535.24 72593.72 00:07:22.986 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.986 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:22.986 Nvme0n1 : 5.04 1650.18 6.45 0.00 0.00 77288.37 13308.85 77030.01 00:07:22.986 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.986 Verification LBA range: start 0x0 length 0x4ff80 00:07:22.986 Nvme1n1p1 : 5.05 1519.66 5.94 0.00 0.00 83884.39 17644.31 68964.04 00:07:22.986 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.986 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:22.986 Nvme1n1p1 : 5.07 1652.71 6.46 0.00 0.00 76812.16 10687.41 74206.92 00:07:22.986 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.986 Verification LBA range: start 0x0 length 0x4ff7f 00:07:22.986 Nvme1n1p2 : 5.06 1519.21 5.93 0.00 0.00 83788.56 19055.85 68964.04 00:07:22.986 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.986 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:22.986 Nvme1n1p2 : 5.09 1661.03 6.49 0.00 0.00 76478.73 10536.17 71383.83 00:07:22.986 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.986 Verification LBA range: start 0x0 length 0x80000 00:07:22.986 Nvme2n1 : 5.06 1518.79 5.93 0.00 0.00 83657.37 20568.22 70980.53 00:07:22.986 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.986 Verification LBA range: start 0x80000 length 0x80000 00:07:22.986 Nvme2n1 : 5.09 1660.59 6.49 0.00 0.00 76370.89 10889.06 66140.95 00:07:22.986 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.986 Verification LBA range: start 0x0 length 0x80000 00:07:22.986 Nvme2n2 : 5.06 1518.37 5.93 0.00 0.00 83513.26 18854.20 71787.13 00:07:22.986 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.986 Verification LBA range: start 0x80000 length 0x80000 00:07:22.986 Nvme2n2 : 5.09 1660.15 6.48 0.00 0.00 76263.34 11141.12 68560.74 00:07:22.986 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.986 Verification LBA range: start 0x0 length 0x80000 00:07:22.986 Nvme2n3 : 5.07 1527.55 5.97 0.00 0.00 82906.50 3226.39 72190.42 00:07:22.986 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.986 Verification LBA range: start 0x80000 length 0x80000 00:07:22.986 Nvme2n3 : 5.09 1659.70 6.48 0.00 0.00 76165.03 10889.06 74206.92 00:07:22.986 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.986 Verification LBA range: start 0x0 length 0x20000 00:07:22.986 Nvme3n1 : 5.08 1535.72 6.00 0.00 0.00 82381.03 9830.40 73400.32 00:07:22.986 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.986 Verification LBA range: start 0x20000 length 0x20000 00:07:22.986 Nvme3n1 
: 5.09 1659.27 6.48 0.00 0.00 76109.77 9578.34 77433.30 00:07:22.986 [2024-11-29T18:20:42.891Z] =================================================================================================================== 00:07:22.986 [2024-11-29T18:20:42.891Z] Total : 22263.07 86.97 0.00 0.00 79814.94 3226.39 77433.30 00:07:24.358 00:07:24.358 real 0m6.914s 00:07:24.358 user 0m13.113s 00:07:24.358 sys 0m0.219s 00:07:24.358 18:20:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.358 ************************************ 00:07:24.358 END TEST bdev_verify 00:07:24.358 ************************************ 00:07:24.358 18:20:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:24.358 18:20:44 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:24.358 18:20:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:24.358 18:20:44 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.358 18:20:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:24.358 ************************************ 00:07:24.358 START TEST bdev_verify_big_io 00:07:24.358 ************************************ 00:07:24.358 18:20:44 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:24.358 [2024-11-29 18:20:44.072917] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:07:24.358 [2024-11-29 18:20:44.073036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73910 ] 00:07:24.358 [2024-11-29 18:20:44.232257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:24.358 [2024-11-29 18:20:44.252763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.358 [2024-11-29 18:20:44.252852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.922 Running I/O for 5 seconds... 
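
bdev_verify_big_io, now starting above, repeats the verify workload with one change: -o 65536 instead of -o 4096, so each I/O moves 64 KiB. That is why the progress samples below sit in the hundreds to low thousands of IOPS rather than the ~22K of the 4 KiB pass; throughput is the comparable number (e.g. the first sample below, 875.00 IOPS, is 875/16 = 54.69 MiB/s):

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # identical to the bdev_verify invocation except for the 64 KiB I/O size
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3
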
00:07:30.170 875.00 IOPS, 54.69 MiB/s [2024-11-29T18:20:51.009Z] 1954.00 IOPS, 122.12 MiB/s [2024-11-29T18:20:51.009Z] 3060.33 IOPS, 191.27 MiB/s 00:07:31.104 Latency(us) 00:07:31.104 [2024-11-29T18:20:51.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.104 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.104 Verification LBA range: start 0x0 length 0xbd0b 00:07:31.104 Nvme0n1 : 5.75 111.40 6.96 0.00 0.00 1103881.53 13107.20 1419610.58 00:07:31.104 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.104 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:31.104 Nvme0n1 : 5.96 91.28 5.71 0.00 0.00 1335965.45 16736.89 1303460.63 00:07:31.104 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.104 Verification LBA range: start 0x0 length 0x4ff8 00:07:31.104 Nvme1n1p1 : 5.90 105.19 6.57 0.00 0.00 1123507.04 107277.39 1922927.06 00:07:31.104 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.104 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:31.104 Nvme1n1p1 : 5.96 89.24 5.58 0.00 0.00 1306856.83 104857.60 1129235.69 00:07:31.104 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.104 Verification LBA range: start 0x0 length 0x4ff7 00:07:31.104 Nvme1n1p2 : 5.98 109.55 6.85 0.00 0.00 1030942.31 131475.30 1367988.38 00:07:31.104 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.104 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:31.104 Nvme1n1p2 : 6.05 93.28 5.83 0.00 0.00 1242418.83 58881.58 1703532.70 00:07:31.104 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.104 Verification LBA range: start 0x0 length 0x8000 00:07:31.104 Nvme2n1 : 6.03 114.06 7.13 0.00 0.00 974331.57 50613.96 2013265.92 00:07:31.104 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.104 Verification LBA range: start 0x8000 length 0x8000 00:07:31.104 Nvme2n1 : 6.05 96.89 6.06 0.00 0.00 1168856.93 23391.31 1484138.34 00:07:31.104 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.104 Verification LBA range: start 0x0 length 0x8000 00:07:31.104 Nvme2n2 : 6.09 117.64 7.35 0.00 0.00 907832.72 47185.92 2064888.12 00:07:31.104 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.104 Verification LBA range: start 0x8000 length 0x8000 00:07:31.104 Nvme2n2 : 6.11 101.12 6.32 0.00 0.00 1076851.59 23693.78 1219574.55 00:07:31.104 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.104 Verification LBA range: start 0x0 length 0x8000 00:07:31.105 Nvme2n3 : 6.20 131.82 8.24 0.00 0.00 779993.66 21778.12 2103604.78 00:07:31.105 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.105 Verification LBA range: start 0x8000 length 0x8000 00:07:31.105 Nvme2n3 : 6.12 104.63 6.54 0.00 0.00 1010360.95 64124.46 1213121.77 00:07:31.105 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.105 Verification LBA range: start 0x0 length 0x2000 00:07:31.105 Nvme3n1 : 6.30 190.72 11.92 0.00 0.00 528378.85 425.35 2129415.88 00:07:31.105 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.105 Verification LBA range: start 0x2000 length 0x2000 00:07:31.105 Nvme3n1 : 6.17 120.52 7.53 0.00 0.00 852381.50 3478.45 1219574.55 00:07:31.105 
[2024-11-29T18:20:51.010Z] =================================================================================================================== 00:07:31.105 [2024-11-29T18:20:51.010Z] Total : 1577.34 98.58 0.00 0.00 985277.34 425.35 2129415.88 00:07:33.002 00:07:33.002 real 0m8.376s 00:07:33.002 user 0m16.039s 00:07:33.002 sys 0m0.204s 00:07:33.002 18:20:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.002 18:20:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:33.002 ************************************ 00:07:33.002 END TEST bdev_verify_big_io 00:07:33.002 ************************************ 00:07:33.002 18:20:52 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:33.002 18:20:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:33.002 18:20:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.002 18:20:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:33.002 ************************************ 00:07:33.002 START TEST bdev_write_zeroes 00:07:33.002 ************************************ 00:07:33.002 18:20:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:33.002 [2024-11-29 18:20:52.509023] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:07:33.002 [2024-11-29 18:20:52.509141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74014 ] 00:07:33.002 [2024-11-29 18:20:52.668448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.002 [2024-11-29 18:20:52.688023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.258 Running I/O for 1 seconds... 
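
bdev_write_zeroes, started above, swaps in the write_zeroes workload: bdevperf issues zero-fill writes through each bdev for one second on the single reactor the test was booted with (EAL -c 0x1 in the parameters above). The equivalent manual invocation, with paths as in the log:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # -w write_zeroes: zero-fill writes; -t 1: one-second run
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w write_zeroes -t 1
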
00:07:34.626 60480.00 IOPS, 236.25 MiB/s 00:07:34.626 Latency(us) 00:07:34.626 [2024-11-29T18:20:54.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.626 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.626 Nvme0n1 : 1.03 8597.34 33.58 0.00 0.00 14849.05 6906.49 30045.74 00:07:34.626 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.626 Nvme1n1p1 : 1.03 8581.80 33.52 0.00 0.00 14846.94 10737.82 29844.09 00:07:34.626 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.626 Nvme1n1p2 : 1.03 8567.51 33.47 0.00 0.00 14820.62 10838.65 29037.49 00:07:34.626 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.626 Nvme2n1 : 1.03 8553.11 33.41 0.00 0.00 14799.55 10132.87 28029.24 00:07:34.626 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.626 Nvme2n2 : 1.03 8539.17 33.36 0.00 0.00 14801.95 10183.29 27424.30 00:07:34.626 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.626 Nvme2n3 : 1.04 8526.07 33.30 0.00 0.00 14781.37 8570.09 28230.89 00:07:34.626 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.626 Nvme3n1 : 1.04 8512.80 33.25 0.00 0.00 14777.79 8065.97 30247.38 00:07:34.626 [2024-11-29T18:20:54.531Z] =================================================================================================================== 00:07:34.626 [2024-11-29T18:20:54.531Z] Total : 59877.80 233.90 0.00 0.00 14811.04 6906.49 30247.38 00:07:34.626 00:07:34.626 real 0m1.849s 00:07:34.626 user 0m1.567s 00:07:34.626 sys 0m0.169s 00:07:34.626 18:20:54 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.626 18:20:54 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:34.626 ************************************ 00:07:34.626 END TEST bdev_write_zeroes 00:07:34.626 ************************************ 00:07:34.626 18:20:54 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:34.626 18:20:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:34.626 18:20:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.626 18:20:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.626 ************************************ 00:07:34.626 START TEST bdev_json_nonenclosed 00:07:34.626 ************************************ 00:07:34.626 18:20:54 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:34.626 [2024-11-29 18:20:54.415216] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
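
A quick consistency check on the bdev_write_zeroes Latency table above: the Total row is just the sum of the seven per-bdev IOPS averages, and the MiB/s figure is that total scaled by the 4 KiB I/O size:

    # the seven per-bdev IOPS averages sum to the reported Total
    echo '8597.34 + 8581.80 + 8567.51 + 8553.11 + 8539.17 + 8526.07 + 8512.80' | bc
    # -> 59877.80
    # 59877.80 IOPS at 4096 bytes per I/O:
    echo 'scale=2; 59877.80 * 4096 / 1048576' | bc
    # -> 233.89 MiB/s (the table's 233.90 differs only in rounding)
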
00:07:34.626 [2024-11-29 18:20:54.415326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74055 ] 00:07:34.883 [2024-11-29 18:20:54.570530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.883 [2024-11-29 18:20:54.589909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.883 [2024-11-29 18:20:54.590000] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:34.883 [2024-11-29 18:20:54.590018] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:34.883 [2024-11-29 18:20:54.590031] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.883 00:07:34.883 real 0m0.299s 00:07:34.883 user 0m0.115s 00:07:34.883 sys 0m0.079s 00:07:34.883 18:20:54 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.883 ************************************ 00:07:34.883 END TEST bdev_json_nonenclosed 00:07:34.883 ************************************ 00:07:34.883 18:20:54 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:34.883 18:20:54 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:34.883 18:20:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:34.883 18:20:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.883 18:20:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.883 ************************************ 00:07:34.883 START TEST bdev_json_nonarray 00:07:34.883 ************************************ 00:07:34.883 18:20:54 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:34.883 [2024-11-29 18:20:54.760010] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:07:34.883 [2024-11-29 18:20:54.760166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74076 ] 00:07:35.141 [2024-11-29 18:20:54.920087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.141 [2024-11-29 18:20:54.941851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.141 [2024-11-29 18:20:54.941946] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
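
The two errors above are the expected outcomes: bdev_json_nonenclosed and bdev_json_nonarray hand bdevperf configs that trip json_config_prepare_ctx's two structural checks, namely that the top-level value is a JSON object and that its "subsystems" member is an array. The fixture files' exact contents are not shown in this log; hypothetical stand-ins that reproduce the same two failures:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # top level is an array, not an object -> "not enclosed in {}."
    printf '[\n  { "subsystems": [] }\n]\n' > /tmp/nonenclosed.json
    # "subsystems" is an object, not an array -> "'subsystems' should be an array."
    printf '{\n  "subsystems": {}\n}\n' > /tmp/nonarray.json
    # either config makes bdevperf stop before any bdev is created
    "$BDEVPERF" --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1

Both harness runs above end the same way: app.c warns that spdk_app_stop'd on non-zero, and the test moves on.
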
00:07:35.141 [2024-11-29 18:20:54.941966] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:35.141 [2024-11-29 18:20:54.941977] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.141 00:07:35.141 real 0m0.306s 00:07:35.141 user 0m0.120s 00:07:35.141 sys 0m0.083s 00:07:35.141 18:20:55 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.141 ************************************ 00:07:35.141 END TEST bdev_json_nonarray 00:07:35.141 ************************************ 00:07:35.141 18:20:55 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:35.399 18:20:55 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:35.399 18:20:55 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:35.399 18:20:55 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:35.399 18:20:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.399 18:20:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.399 18:20:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:35.399 ************************************ 00:07:35.399 START TEST bdev_gpt_uuid 00:07:35.399 ************************************ 00:07:35.399 18:20:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:35.399 18:20:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:35.399 18:20:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:35.399 18:20:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74096 00:07:35.399 18:20:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:35.399 18:20:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74096 00:07:35.399 18:20:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 74096 ']' 00:07:35.399 18:20:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.399 18:20:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:35.399 18:20:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.399 18:20:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.399 18:20:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.399 18:20:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:35.399 [2024-11-29 18:20:55.143592] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
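
bdev_gpt_uuid, starting above, is shaped differently from the bdevperf stages: it boots a long-lived spdk_tgt, waits for the RPC socket with waitforlisten, and then drives everything through rpc.py. waitforlisten itself is not expanded in this trace; a minimal stand-in that polls the socket until the target answers (rpc_get_methods is a standard SPDK RPC; the loop bounds here are an assumption) might look like:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # poll the UNIX-domain RPC socket until the target responds
    for _ in $(seq 1 100); do
        if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done

Once the socket answers, the test loads test/bdev/bdev.json via load_config and, as the trace continues below, looks up each GPT partition bdev by its unique partition GUID with bdev_get_bdevs -b, then verifies the returned aliases and driver_specific.gpt fields with jq.
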
00:07:35.399 [2024-11-29 18:20:55.143722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74096 ] 00:07:35.399 [2024-11-29 18:20:55.302796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.657 [2024-11-29 18:20:55.325084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.220 18:20:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.220 18:20:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:36.220 18:20:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:36.220 18:20:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.220 18:20:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:36.477 Some configs were skipped because the RPC state that can call them passed over. 00:07:36.477 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.477 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:36.477 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.477 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:36.477 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.477 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:36.477 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.477 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:36.477 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.477 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:36.477 { 00:07:36.477 "name": "Nvme1n1p1", 00:07:36.477 "aliases": [ 00:07:36.477 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:36.477 ], 00:07:36.477 "product_name": "GPT Disk", 00:07:36.477 "block_size": 4096, 00:07:36.477 "num_blocks": 655104, 00:07:36.477 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:36.477 "assigned_rate_limits": { 00:07:36.477 "rw_ios_per_sec": 0, 00:07:36.477 "rw_mbytes_per_sec": 0, 00:07:36.477 "r_mbytes_per_sec": 0, 00:07:36.477 "w_mbytes_per_sec": 0 00:07:36.477 }, 00:07:36.477 "claimed": false, 00:07:36.477 "zoned": false, 00:07:36.477 "supported_io_types": { 00:07:36.477 "read": true, 00:07:36.477 "write": true, 00:07:36.477 "unmap": true, 00:07:36.477 "flush": true, 00:07:36.477 "reset": true, 00:07:36.477 "nvme_admin": false, 00:07:36.477 "nvme_io": false, 00:07:36.477 "nvme_io_md": false, 00:07:36.477 "write_zeroes": true, 00:07:36.477 "zcopy": false, 00:07:36.477 "get_zone_info": false, 00:07:36.477 "zone_management": false, 00:07:36.477 "zone_append": false, 00:07:36.477 "compare": true, 00:07:36.477 "compare_and_write": false, 00:07:36.477 "abort": true, 00:07:36.477 "seek_hole": false, 00:07:36.477 "seek_data": false, 00:07:36.477 "copy": true, 00:07:36.477 "nvme_iov_md": false 00:07:36.477 }, 00:07:36.477 "driver_specific": { 
00:07:36.477 "gpt": { 00:07:36.477 "base_bdev": "Nvme1n1", 00:07:36.477 "offset_blocks": 256, 00:07:36.477 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:36.477 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:36.477 "partition_name": "SPDK_TEST_first" 00:07:36.477 } 00:07:36.477 } 00:07:36.477 } 00:07:36.477 ]' 00:07:36.477 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:36.477 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:36.477 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:36.735 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:36.735 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:36.735 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:36.735 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:36.735 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.735 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:36.735 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.735 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:36.735 { 00:07:36.735 "name": "Nvme1n1p2", 00:07:36.735 "aliases": [ 00:07:36.735 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:36.735 ], 00:07:36.735 "product_name": "GPT Disk", 00:07:36.735 "block_size": 4096, 00:07:36.735 "num_blocks": 655103, 00:07:36.735 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:36.735 "assigned_rate_limits": { 00:07:36.735 "rw_ios_per_sec": 0, 00:07:36.735 "rw_mbytes_per_sec": 0, 00:07:36.735 "r_mbytes_per_sec": 0, 00:07:36.735 "w_mbytes_per_sec": 0 00:07:36.735 }, 00:07:36.735 "claimed": false, 00:07:36.735 "zoned": false, 00:07:36.735 "supported_io_types": { 00:07:36.735 "read": true, 00:07:36.735 "write": true, 00:07:36.735 "unmap": true, 00:07:36.735 "flush": true, 00:07:36.735 "reset": true, 00:07:36.735 "nvme_admin": false, 00:07:36.735 "nvme_io": false, 00:07:36.735 "nvme_io_md": false, 00:07:36.735 "write_zeroes": true, 00:07:36.735 "zcopy": false, 00:07:36.735 "get_zone_info": false, 00:07:36.735 "zone_management": false, 00:07:36.735 "zone_append": false, 00:07:36.735 "compare": true, 00:07:36.735 "compare_and_write": false, 00:07:36.735 "abort": true, 00:07:36.735 "seek_hole": false, 00:07:36.735 "seek_data": false, 00:07:36.735 "copy": true, 00:07:36.735 "nvme_iov_md": false 00:07:36.735 }, 00:07:36.735 "driver_specific": { 00:07:36.735 "gpt": { 00:07:36.735 "base_bdev": "Nvme1n1", 00:07:36.735 "offset_blocks": 655360, 00:07:36.735 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:36.735 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:36.735 "partition_name": "SPDK_TEST_second" 00:07:36.735 } 00:07:36.735 } 00:07:36.735 } 00:07:36.735 ]' 00:07:36.735 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:36.735 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:36.735 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:36.736 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:36.736 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:36.736 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:36.736 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 74096 00:07:36.736 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 74096 ']' 00:07:36.736 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 74096 00:07:36.736 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:36.736 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.736 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74096 00:07:36.736 killing process with pid 74096 00:07:36.736 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.736 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:36.736 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74096' 00:07:36.736 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 74096 00:07:36.736 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 74096 00:07:36.994 00:07:36.994 real 0m1.785s 00:07:36.994 user 0m1.958s 00:07:36.994 sys 0m0.357s 00:07:36.994 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.994 ************************************ 00:07:36.994 END TEST bdev_gpt_uuid 00:07:36.994 ************************************ 00:07:36.994 18:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:36.994 18:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:36.994 18:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:36.994 18:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:36.994 18:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:37.251 18:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:37.251 18:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:37.251 18:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:37.251 18:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:37.251 18:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:37.509 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:37.509 Waiting for block devices as requested 00:07:37.509 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:37.767 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:37.767 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:37.767 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:43.031 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:43.031 18:21:02 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:43.031 18:21:02 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:43.290 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:43.290 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:43.290 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:43.290 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:43.290 18:21:02 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:43.290 00:07:43.290 real 0m49.897s 00:07:43.290 user 1m3.782s 00:07:43.290 sys 0m7.917s 00:07:43.290 18:21:02 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.290 ************************************ 00:07:43.290 END TEST blockdev_nvme_gpt 00:07:43.290 ************************************ 00:07:43.290 18:21:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:43.290 18:21:03 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:43.290 18:21:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.290 18:21:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.290 18:21:03 -- common/autotest_common.sh@10 -- # set +x 00:07:43.290 ************************************ 00:07:43.290 START TEST nvme 00:07:43.290 ************************************ 00:07:43.290 18:21:03 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:43.290 * Looking for test storage... 00:07:43.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:43.290 18:21:03 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:43.290 18:21:03 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:43.290 18:21:03 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:43.290 18:21:03 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:43.290 18:21:03 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.290 18:21:03 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.290 18:21:03 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.290 18:21:03 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.290 18:21:03 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.290 18:21:03 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.290 18:21:03 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.290 18:21:03 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.290 18:21:03 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.290 18:21:03 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.290 18:21:03 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.290 18:21:03 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:43.290 18:21:03 nvme -- scripts/common.sh@345 -- # : 1 00:07:43.290 18:21:03 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.290 18:21:03 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:43.290 18:21:03 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:43.290 18:21:03 nvme -- scripts/common.sh@353 -- # local d=1 00:07:43.290 18:21:03 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.290 18:21:03 nvme -- scripts/common.sh@355 -- # echo 1 00:07:43.290 18:21:03 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.290 18:21:03 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:43.290 18:21:03 nvme -- scripts/common.sh@353 -- # local d=2 00:07:43.290 18:21:03 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.290 18:21:03 nvme -- scripts/common.sh@355 -- # echo 2 00:07:43.290 18:21:03 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.290 18:21:03 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.290 18:21:03 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.290 18:21:03 nvme -- scripts/common.sh@368 -- # return 0 00:07:43.290 18:21:03 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.290 18:21:03 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:43.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.290 --rc genhtml_branch_coverage=1 00:07:43.290 --rc genhtml_function_coverage=1 00:07:43.290 --rc genhtml_legend=1 00:07:43.290 --rc geninfo_all_blocks=1 00:07:43.290 --rc geninfo_unexecuted_blocks=1 00:07:43.290 00:07:43.290 ' 00:07:43.290 18:21:03 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:43.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.290 --rc genhtml_branch_coverage=1 00:07:43.290 --rc genhtml_function_coverage=1 00:07:43.290 --rc genhtml_legend=1 00:07:43.290 --rc geninfo_all_blocks=1 00:07:43.290 --rc geninfo_unexecuted_blocks=1 00:07:43.290 00:07:43.290 ' 00:07:43.290 18:21:03 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:43.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.290 --rc genhtml_branch_coverage=1 00:07:43.290 --rc genhtml_function_coverage=1 00:07:43.290 --rc genhtml_legend=1 00:07:43.290 --rc geninfo_all_blocks=1 00:07:43.290 --rc geninfo_unexecuted_blocks=1 00:07:43.290 00:07:43.290 ' 00:07:43.290 18:21:03 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:43.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.290 --rc genhtml_branch_coverage=1 00:07:43.290 --rc genhtml_function_coverage=1 00:07:43.290 --rc genhtml_legend=1 00:07:43.290 --rc geninfo_all_blocks=1 00:07:43.290 --rc geninfo_unexecuted_blocks=1 00:07:43.290 00:07:43.290 ' 00:07:43.290 18:21:03 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:43.857 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:44.423 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:44.423 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:44.423 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:44.423 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:44.423 18:21:04 nvme -- nvme/nvme.sh@79 -- # uname 00:07:44.423 Waiting for stub to ready for secondary processes... 
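The "Waiting for stub..." line above comes from autotest's stub launcher (_start_stub in common/autotest_common.sh, traced just below): the stub binary is started in the background as a DPDK primary process, and the harness then polls for a ready marker before the actual test binaries attach as secondary processes. Below is a minimal sketch of that wait loop, assuming the same paths the trace uses; the function name and the bounded retry budget are illustrative additions, not the real helper.
start_stub_sketch() {
    local stub_args=$1 tries=${2:-30}   # retry budget is an assumption, not in the trace
    # Intentionally unquoted: the trace passes '-s 4096 -i 0 -m 0xE' as one string.
    /home/vagrant/spdk_repo/spdk/test/app/stub/stub $stub_args &
    stubpid=$!
    # /var/run/spdk_stub0 appears once the stub reports ready; until then,
    # confirm the process itself is still alive via /proc.
    while [ ! -e /var/run/spdk_stub0 ]; do
        [ -e "/proc/$stubpid" ] || return 1   # stub exited before becoming ready
        (( tries-- > 0 )) || return 1         # give up rather than spin forever
        sleep 1s
    done
}
The @1077/@1079/@1080 trace entries that follow are exactly this existence-check-plus-sleep loop, here with stubpid=74721.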
00:07:44.423 18:21:04 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:44.423 18:21:04 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:44.423 18:21:04 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:44.423 18:21:04 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:44.423 18:21:04 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:44.423 18:21:04 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:44.423 18:21:04 nvme -- common/autotest_common.sh@1075 -- # stubpid=74721 00:07:44.423 18:21:04 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:44.423 18:21:04 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:44.423 18:21:04 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74721 ]] 00:07:44.423 18:21:04 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:44.423 18:21:04 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:44.423 [2024-11-29 18:21:04.277731] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:07:44.423 [2024-11-29 18:21:04.277842] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:45.359 [2024-11-29 18:21:05.045953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.359 [2024-11-29 18:21:05.059102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.359 [2024-11-29 18:21:05.059387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.359 [2024-11-29 18:21:05.059427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.359 [2024-11-29 18:21:05.070574] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:45.359 [2024-11-29 18:21:05.070625] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:45.359 [2024-11-29 18:21:05.084100] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:45.359 [2024-11-29 18:21:05.084306] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:45.359 [2024-11-29 18:21:05.086013] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:45.359 [2024-11-29 18:21:05.086360] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:45.359 [2024-11-29 18:21:05.086643] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:45.359 [2024-11-29 18:21:05.088111] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:45.359 [2024-11-29 18:21:05.088414] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:45.359 [2024-11-29 18:21:05.088864] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:45.359 [2024-11-29 18:21:05.091085] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:45.359 [2024-11-29 18:21:05.091540] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 
00:07:45.359 [2024-11-29 18:21:05.091676] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:45.359 [2024-11-29 18:21:05.091822] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:45.359 [2024-11-29 18:21:05.092179] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:45.359 done. 00:07:45.359 18:21:05 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:45.359 18:21:05 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:45.359 18:21:05 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:45.359 18:21:05 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:45.359 18:21:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.359 18:21:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.619 ************************************ 00:07:45.619 START TEST nvme_reset 00:07:45.619 ************************************ 00:07:45.619 18:21:05 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:45.619 Initializing NVMe Controllers 00:07:45.619 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:45.619 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:45.619 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:45.619 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:45.619 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:45.619 ************************************ 00:07:45.619 END TEST nvme_reset 00:07:45.619 ************************************ 00:07:45.619 00:07:45.619 real 0m0.206s 00:07:45.619 user 0m0.079s 00:07:45.619 sys 0m0.083s 00:07:45.619 18:21:05 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.619 18:21:05 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:45.900 18:21:05 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:45.900 18:21:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.900 18:21:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.900 18:21:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.900 ************************************ 00:07:45.900 START TEST nvme_identify 00:07:45.900 ************************************ 00:07:45.900 18:21:05 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:45.900 18:21:05 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:45.900 18:21:05 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:45.900 18:21:05 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:45.900 18:21:05 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:45.900 18:21:05 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:45.900 18:21:05 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:45.900 18:21:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:45.900 18:21:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:45.900 18:21:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:45.900 18:21:05 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # 
(( 4 == 0 )) 00:07:45.900 18:21:05 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:45.900 18:21:05 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:45.900 ===================================================== 00:07:45.900 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:45.900 ===================================================== 00:07:45.900 Controller Capabilities/Features 00:07:45.900 ================================ 00:07:45.900 Vendor ID: 1b36 00:07:45.900 Subsystem Vendor ID: 1af4 00:07:45.900 Serial Number: 12341 00:07:45.900 Model Number: QEMU NVMe Ctrl 00:07:45.900 Firmware Version: 8.0.0 00:07:45.900 Recommended Arb Burst: 6 00:07:45.900 IEEE OUI Identifier: 00 54 52 00:07:45.900 Multi-path I/O 00:07:45.900 May have multiple subsystem ports: No 00:07:45.900 May have multiple controllers: No 00:07:45.900 Associated with SR-IOV VF: No 00:07:45.900 Max Data Transfer Size: 524288 00:07:45.900 Max Number of Namespaces: 256 00:07:45.900 Max Number of I/O Queues: 64 00:07:45.900 NVMe Specification Version (VS): 1.4 00:07:45.900 NVMe Specification Version (Identify): 1.4 00:07:45.900 Maximum Queue Entries: 2048 00:07:45.900 Contiguous Queues Required: Yes 00:07:45.900 Arbitration Mechanisms Supported 00:07:45.900 Weighted Round Robin: Not Supported 00:07:45.900 Vendor Specific: Not Supported 00:07:45.900 Reset Timeout: 7500 ms 00:07:45.900 Doorbell Stride: 4 bytes 00:07:45.900 NVM Subsystem Reset: Not Supported 00:07:45.900 Command Sets Supported 00:07:45.900 NVM Command Set: Supported 00:07:45.900 Boot Partition: Not Supported 00:07:45.900 Memory Page Size Minimum: 4096 bytes 00:07:45.900 Memory Page Size Maximum: 65536 bytes 00:07:45.900 Persistent Memory Region: Not Supported 00:07:45.900 Optional Asynchronous Events Supported 00:07:45.900 Namespace Attribute Notices: Supported 00:07:45.900 Firmware Activation Notices: Not Supported 00:07:45.900 ANA Change Notices: Not Supported 00:07:45.900 PLE Aggregate Log Change Notices: Not Supported 00:07:45.900 LBA Status Info Alert Notices: Not Supported 00:07:45.900 EGE Aggregate Log Change Notices: Not Supported 00:07:45.900 Normal NVM Subsystem Shutdown event: Not Supported 00:07:45.900 Zone Descriptor Change Notices: Not Supported 00:07:45.900 Discovery Log Change Notices: Not Supported 00:07:45.900 Controller Attributes 00:07:45.900 128-bit Host Identifier: Not Supported 00:07:45.900 Non-Operational Permissive Mode: Not Supported 00:07:45.900 NVM Sets: Not Supported 00:07:45.900 Read Recovery Levels: Not Supported 00:07:45.901 Endurance Groups: Not Supported 00:07:45.901 Predictable Latency Mode: Not Supported 00:07:45.901 Traffic Based Keep Alive: Not Supported 00:07:45.901 Namespace Granularity: Not Supported 00:07:45.901 SQ Associations: Not Supported 00:07:45.901 UUID List: Not Supported 00:07:45.901 Multi-Domain Subsystem: Not Supported 00:07:45.901 Fixed Capacity Management: Not Supported 00:07:45.901 Variable Capacity Management: Not Supported 00:07:45.901 Delete Endurance Group: Not Supported 00:07:45.901 Delete NVM Set: Not Supported 00:07:45.901 Extended LBA Formats Supported: Supported 00:07:45.901 Flexible Data Placement Supported: Not Supported 00:07:45.901 00:07:45.901 Controller Memory Buffer Support 00:07:45.901 ================================ 00:07:45.901 Supported: No 00:07:45.901 00:07:45.901 Persistent Memory Region Support 00:07:45.901 ================================
00:07:45.901 Supported: No 00:07:45.901 00:07:45.901 Admin Command Set Attributes 00:07:45.901 ============================ 00:07:45.901 Security Send/Receive: Not Supported 00:07:45.901 Format NVM: Supported 00:07:45.901 Firmware Activate/Download: Not Supported 00:07:45.901 Namespace Management: Supported 00:07:45.901 Device Self-Test: Not Supported 00:07:45.901 Directives: Supported 00:07:45.901 NVMe-MI: Not Supported 00:07:45.901 Virtualization Management: Not Supported 00:07:45.901 Doorbell Buffer Config: Supported 00:07:45.901 Get LBA Status Capability: Not Supported 00:07:45.901 Command & Feature Lockdown Capability: Not Supported 00:07:45.901 Abort Command Limit: 4 00:07:45.901 Async Event Request Limit: 4 00:07:45.901 Number of Firmware Slots: N/A 00:07:45.901 Firmware Slot 1 Read-Only: N/A 00:07:45.901 Firmware Activation Without Reset: N/A 00:07:45.901 Multiple Update Detection Support: N/A 00:07:45.901 Firmware Update Granularity: No Information Provided 00:07:45.901 Per-Namespace SMART Log: Yes 00:07:45.901 Asymmetric Namespace Access Log Page: Not Supported 00:07:45.901 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:45.901 Command Effects Log Page: Supported 00:07:45.901 Get Log Page Extended Data: Supported 00:07:45.901 Telemetry Log Pages: Not Supported 00:07:45.901 Persistent Event Log Pages: Not Supported 00:07:45.901 Supported Log Pages Log Page: May Support 00:07:45.901 Commands Supported & Effects Log Page: Not Supported 00:07:45.901 Feature Identifiers & Effects Log Page:May Support 00:07:45.901 NVMe-MI Commands & Effects Log Page: May Support 00:07:45.901 Data Area 4 for Telemetry Log: Not Supported 00:07:45.901 Error Log Page Entries Supported: 1 00:07:45.901 Keep Alive: Not Supported 00:07:45.901 00:07:45.901 NVM Command Set Attributes 00:07:45.901 ========================== 00:07:45.901 Submission Queue Entry Size 00:07:45.901 Max: 64 00:07:45.901 Min: 64 00:07:45.901 Completion Queue Entry Size 00:07:45.901 Max: 16 00:07:45.901 Min: 16 00:07:45.901 Number of Namespaces: 256 00:07:45.901 Compare Command: Supported 00:07:45.901 Write Uncorrectable Command: Not Supported 00:07:45.901 Dataset Management Command: Supported 00:07:45.901 Write Zeroes Command: Supported 00:07:45.901 Set Features Save Field: Supported 00:07:45.901 Reservations: Not Supported 00:07:45.901 Timestamp: Supported 00:07:45.901 Copy: Supported 00:07:45.901 Volatile Write Cache: Present 00:07:45.901 Atomic Write Unit (Normal): 1 00:07:45.901 Atomic Write Unit (PFail): 1 00:07:45.901 Atomic Compare & Write Unit: 1 00:07:45.901 Fused Compare & Write: Not Supported 00:07:45.901 Scatter-Gather List 00:07:45.901 SGL Command Set: Supported 00:07:45.901 SGL Keyed: Not Supported 00:07:45.901 SGL Bit Bucket Descriptor: Not Supported 00:07:45.901 SGL Metadata Pointer: Not Supported 00:07:45.901 Oversized SGL: Not Supported 00:07:45.901 SGL Metadata Address: Not Supported 00:07:45.901 SGL Offset: Not Supported 00:07:45.901 Transport SGL Data Block: Not Supported 00:07:45.901 Replay Protected Memory Block: Not Supported 00:07:45.901 00:07:45.901 Firmware Slot Information 00:07:45.901 ========================= 00:07:45.901 Active slot: 1 00:07:45.901 Slot 1 Firmware Revision: 1.0 00:07:45.901 00:07:45.901 00:07:45.901 Commands Supported and Effects 00:07:45.901 ============================== 00:07:45.901 Admin Commands 00:07:45.901 -------------- 00:07:45.901 Delete I/O Submission Queue (00h): Supported 00:07:45.901 Create I/O Submission Queue (01h): Supported 00:07:45.901 Get Log Page (02h): Supported 
00:07:45.901 Delete I/O Completion Queue (04h): Supported 00:07:45.901 Create I/O Completion Queue (05h): Supported 00:07:45.901 Identify (06h): Supported 00:07:45.901 Abort (08h): Supported 00:07:45.901 Set Features (09h): Supported 00:07:45.901 Get Features (0Ah): Supported 00:07:45.901 Asynchronous Event Request (0Ch): Supported 00:07:45.901 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:45.901 Directive Send (19h): Supported 00:07:45.901 Directive Receive (1Ah): Supported 00:07:45.901 Virtualization Management (1Ch): Supported 00:07:45.901 Doorbell Buffer Config (7Ch): Supported 00:07:45.901 Format NVM (80h): Supported LBA-Change 00:07:45.901 I/O Commands 00:07:45.901 ------------ 00:07:45.901 Flush (00h): Supported LBA-Change 00:07:45.901 Write (01h): Supported LBA-Change 00:07:45.901 Read (02h): Supported 00:07:45.901 Compare (05h): Supported 00:07:45.901 Write Zeroes (08h): Supported LBA-Change 00:07:45.901 Dataset Management (09h): Supported LBA-Change 00:07:45.901 Unknown (0Ch): Supported 00:07:45.901 Unknown (12h): Supported 00:07:45.901 Copy (19h): Supported LBA-Change 00:07:45.901 Unknown (1Dh): Supported LBA-Change 00:07:45.901 00:07:45.901 Error Log 00:07:45.901 ========= 00:07:45.901 00:07:45.901 Arbitration 00:07:45.901 =========== 00:07:45.901 Arbitration Burst: no limit 00:07:45.901 00:07:45.901 Power Management 00:07:45.901 ================ 00:07:45.901 Number of Power States: 1 00:07:45.901 Current Power State: Power State #0 00:07:45.901 Power State #0: 00:07:45.901 Max Power: 25.00 W 00:07:45.901 Non-Operational State: Operational 00:07:45.901 Entry Latency: 16 microseconds 00:07:45.901 Exit Latency: 4 microseconds 00:07:45.901 Relative Read Throughput: 0 00:07:45.901 Relative Read Latency: 0 00:07:45.901 Relative Write Throughput: 0 00:07:45.901 Relative Write Latency: 0 00:07:45.901 Idle Power: Not Reported 00:07:45.901 Active Power: Not Reported 00:07:45.901 Non-Operational Permissive Mode: Not Supported 00:07:45.901 00:07:45.901 Health Information 00:07:45.901 ================== 00:07:45.901 Critical Warnings: 00:07:45.901 Available Spare Space: OK 00:07:45.901 Temperature: OK 00:07:45.901 Device Reliability: OK 00:07:45.901 Read Only: No 00:07:45.901 Volatile Memory Backup: OK 00:07:45.901 Current Temperature: 323 Kelvin (50 Celsius) 00:07:45.901 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:45.901 Available Spare: 0% 00:07:45.901 Available Spare Threshold: 0% 00:07:45.901 Life Percentage Used: 0% 00:07:45.901 Data Units Read: 1001 00:07:45.901 Data Units Written: 868 00:07:45.901 Host Read Commands: 54148 00:07:45.901 Host Write Commands: 52931 00:07:45.901 Controller Busy Time: 0 minutes 00:07:45.901 Power Cycles: 0 00:07:45.901 Power On Hours: 0 hours 00:07:45.901 Unsafe Shutdowns: 0 00:07:45.901 Unrecoverable Media Errors: 0 00:07:45.901 Lifetime Error Log Entries: 0 00:07:45.901 Warning Temperature Time: 0 minutes 00:07:45.901 Critical Temperature Time: 0 minutes 00:07:45.901 00:07:45.901 Number of Queues 00:07:45.901 ================ 00:07:45.901 Number of I/O Submission Queues: 64 00:07:45.901 Number of I/O Completion Queues: 64 00:07:45.901 00:07:45.901 ZNS Specific Controller Data 00:07:45.901 ============================ 00:07:45.902 Zone Append Size Limit: 0 00:07:45.902 00:07:45.902 00:07:45.902 Active Namespaces 00:07:45.902 ================= 00:07:45.902 Namespace ID:1 00:07:45.902 Error Recovery Timeout: Unlimited 00:07:45.902 Command Set Identifier: NVM (00h) 00:07:45.902 Deallocate: Supported 00:07:45.902 
Deallocated/Unwritten Error: Supported 00:07:45.902 Deallocated Read Value: All 0x00 00:07:45.902 Deallocate in Write Zeroes: Not Supported 00:07:45.902 Deallocated Guard Field: 0xFFFF 00:07:45.902 Flush: Supported 00:07:45.902 Reservation: Not Supported 00:07:45.902 Namespace Sharing Capabilities: Private 00:07:45.902 Size (in LBAs): 1310720 (5GiB) 00:07:45.902 Capacity (in LBAs): 1310720 (5GiB) 00:07:45.902 Utilization (in LBAs): 1310720 (5GiB) 00:07:45.902 Thin Provisioning: Not Supported 00:07:45.902 Per-NS Atomic Units: No 00:07:45.902 Maximum Single Source Range Length: 128 00:07:45.902 Maximum Copy Length: 128 00:07:45.902 Maximum Source Range Count: 128 00:07:45.902 NGUID/EUI64 Never Reused: No 00:07:45.902 Namespace Write Protected: No 00:07:45.902 Number of LBA Formats: 8 00:07:45.902 Current LBA Format: LBA Format #04 00:07:45.902 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.902 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.902 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.902 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.902 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.902 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.902 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.902 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.902 00:07:45.902 NVM Specific Namespace Data 00:07:45.902 =========================== 00:07:45.902 Logical Block Storage Tag Mask: 0 00:07:45.902 Protection Information Capabilities: 00:07:45.902 16b Guard Protection Information Storage Tag Support: No 00:07:45.902 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.902 Storage Tag Check Read Support: No 00:07:45.902 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.902 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.902 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.902 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.902 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.902 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.902 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.902 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.902 ===================================================== 00:07:45.902 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:45.902 ===================================================== 00:07:45.902 Controller Capabilities/Features 00:07:45.902 ================================ 00:07:45.902 Vendor ID: 1b36 00:07:45.902 Subsystem Vendor ID: 1af4 00:07:45.902 Serial Number: 12343 00:07:45.902 Model Number: QEMU NVMe Ctrl 00:07:45.902 Firmware Version: 8.0.0 00:07:45.902 Recommended Arb Burst: 6 00:07:45.902 IEEE OUI Identifier: 00 54 52 00:07:45.902 Multi-path I/O 00:07:45.902 May have multiple subsystem ports: No 00:07:45.902 May have multiple controllers: Yes 00:07:45.902 Associated with SR-IOV VF: No 00:07:45.902 Max Data Transfer Size: 524288 00:07:45.902 Max Number of Namespaces: 256 00:07:45.902 Max Number of I/O Queues: 64 00:07:45.902 NVMe Specification Version (VS): 1.4 00:07:45.902 NVMe Specification Version (Identify): 1.4 00:07:45.902 Maximum Queue Entries: 
2048 00:07:45.902 Contiguous Queues Required: Yes 00:07:45.902 Arbitration Mechanisms Supported 00:07:45.902 Weighted Round Robin: Not Supported 00:07:45.902 Vendor Specific: Not Supported 00:07:45.902 Reset Timeout: 7500 ms 00:07:45.902 Doorbell Stride: 4 bytes 00:07:45.902 NVM Subsystem Reset: Not Supported 00:07:45.902 Command Sets Supported 00:07:45.902 NVM Command Set: Supported 00:07:45.902 Boot Partition: Not Supported 00:07:45.902 Memory Page Size Minimum: 4096 bytes 00:07:45.902 Memory Page Size Maximum: 65536 bytes 00:07:45.902 Persistent Memory Region: Not Supported 00:07:45.902 Optional Asynchronous Events Supported 00:07:45.902 Namespace Attribute Notices: Supported 00:07:45.902 Firmware Activation Notices: Not Supported 00:07:45.902 ANA Change Notices: Not Supported 00:07:45.902 PLE Aggregate Log Change Notices: Not Supported 00:07:45.902 LBA Status Info Alert Notices: Not Supported 00:07:45.902 EGE Aggregate Log Change Notices: Not Supported 00:07:45.902 Normal NVM Subsystem Shutdown event: Not Supported 00:07:45.902 Zone Descriptor Change Notices: Not Supported 00:07:45.902 Discovery Log Change Notices: Not Supported 00:07:45.902 Controller Attributes 00:07:45.902 128-bit Host Identifier: Not Supported 00:07:45.902 Non-Operational Permissive Mode: Not Supported 00:07:45.902 NVM Sets: Not Supported 00:07:45.902 Read Recovery Levels: Not Supported 00:07:45.902 Endurance Groups: Supported 00:07:45.902 Predictable Latency Mode: Not Supported 00:07:45.902 Traffic Based Keep Alive: Not Supported 00:07:45.902 Namespace Granularity: Not Supported 00:07:45.902 SQ Associations: Not Supported 00:07:45.902 UUID List: Not Supported 00:07:45.902 Multi-Domain Subsystem: Not Supported 00:07:45.902 Fixed Capacity Management: Not Supported 00:07:45.902 Variable Capacity Management: Not Supported 00:07:45.902 Delete Endurance Group: Not Supported 00:07:45.902 Delete NVM Set: Not Supported 00:07:45.902 Extended LBA Formats Supported: Supported 00:07:45.902 Flexible Data Placement Supported: Supported 00:07:45.902 00:07:45.902 Controller Memory Buffer Support 00:07:45.902 ================================ 00:07:45.902 Supported: No 00:07:45.902 00:07:45.902 Persistent Memory Region Support 00:07:45.902 ================================ 00:07:45.902 Supported: No 00:07:45.902 00:07:45.902 Admin Command Set Attributes 00:07:45.902 ============================ 00:07:45.902 Security Send/Receive: Not Supported 00:07:45.902 Format NVM: Supported 00:07:45.902 Firmware Activate/Download: Not Supported 00:07:45.902 Namespace Management: Supported 00:07:45.902 Device Self-Test: Not Supported 00:07:45.902 Directives: Supported 00:07:45.902 NVMe-MI: Not Supported 00:07:45.902 Virtualization Management: Not Supported 00:07:45.902 Doorbell Buffer Config: Supported 00:07:45.902 Get LBA Status Capability: Not Supported 00:07:45.902 Command & Feature Lockdown Capability: Not Supported 00:07:45.902 Abort Command Limit: 4 00:07:45.902 Async Event Request Limit: 4 00:07:45.902 Number of Firmware Slots: N/A 00:07:45.902 Firmware Slot 1 Read-Only: N/A 00:07:45.902 Firmware Activation Without Reset: N/A 00:07:45.902 Multiple Update Detection Support: N/A 00:07:45.902 Firmware Update Granularity: No Information Provided 00:07:45.902 Per-Namespace SMART Log: Yes 00:07:45.902 Asymmetric Namespace Access Log Page: Not Supported 00:07:45.902 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:45.902 Command Effects Log Page: Supported 00:07:45.902 Get Log Page Extended Data: Supported 00:07:45.902 Telemetry Log Pages:
Not Supported 00:07:45.902 Persistent Event Log Pages: Not Supported 00:07:45.902 Supported Log Pages Log Page: May Support 00:07:45.902 Commands Supported & Effects Log Page: Not Supported 00:07:45.902 Feature Identifiers & Effects Log Page:May Support 00:07:45.902 NVMe-MI Commands & Effects Log Page: May Support 00:07:45.902 Data Area 4 for Telemetry Log: Not Supported 00:07:45.902 Error Log Page Entries Supported: 1 00:07:45.902 Keep Alive: Not Supported 00:07:45.902 00:07:45.902 NVM Command Set Attributes 00:07:45.902 ========================== 00:07:45.902 Submission Queue Entry Size 00:07:45.902 Max: 64 00:07:45.902 Min: 64 00:07:45.902 Completion Queue Entry Size 00:07:45.902 Max: 16 00:07:45.903 Min: 16 00:07:45.903 Number of Namespaces: 256 00:07:45.903 Compare Command: Supported 00:07:45.903 Write Uncorrectable Command: Not Supported 00:07:45.903 Dataset Management Command: Supported 00:07:45.903 Write Zeroes Command: Supported 00:07:45.903 Set Features Save Field: Supported 00:07:45.903 Reservations: Not Supported 00:07:45.903 Timestamp: Supported 00:07:45.903 Copy: Supported 00:07:45.903 Volatile Write Cache: Present 00:07:45.903 Atomic Write Unit (Normal): 1 00:07:45.903 Atomic Write Unit (PFail): 1 00:07:45.903 Atomic Compare & Write Unit: 1 00:07:45.903 Fused Compare & Write: Not Supported 00:07:45.903 Scatter-Gather List 00:07:45.903 SGL Command Set: Supported 00:07:45.903 SGL Keyed: Not Supported 00:07:45.903 SGL Bit Bucket Descriptor: Not Supported 00:07:45.903 SGL Metadata Pointer: Not Supported 00:07:45.903 Oversized SGL: Not Supported 00:07:45.903 SGL Metadata Address: Not Supported 00:07:45.903 SGL Offset: Not Supported 00:07:45.903 Transport SGL Data Block: Not Supported 00:07:45.903 Replay Protected Memory Block: Not Supported 00:07:45.903 00:07:45.903 Firmware Slot Information 00:07:45.903 ========================= 00:07:45.903 Active slot: 1 00:07:45.903 Slot 1 Firmware Revision: 1.0 00:07:45.903 00:07:45.903 00:07:45.903 Commands Supported and Effects 00:07:45.903 ============================== 00:07:45.903 Admin Commands 00:07:45.903 -------------- 00:07:45.903 Delete I/O Submission Queue (00h): Supported 00:07:45.903 Create I/O Submission Queue (01h): Supported 00:07:45.903 Get Log Page (02h): Supported 00:07:45.903 Delete I/O Completion Queue (04h): Supported 00:07:45.903 Create I/O Completion Queue (05h): Supported 00:07:45.903 Identify (06h): Supported 00:07:45.903 Abort (08h): Supported 00:07:45.903 Set Features (09h): Supported 00:07:45.903 Get Features (0Ah): Supported 00:07:45.903 Asynchronous Event Request (0Ch): Supported 00:07:45.903 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:45.903 Directive Send (19h): Supported 00:07:45.903 Directive Receive (1Ah): Supported 00:07:45.903 Virtualization Management (1Ch): Supported 00:07:45.903 Doorbell Buffer Config (7Ch): Supported 00:07:45.903 Format NVM (80h): Supported LBA-Change 00:07:45.903 I/O Commands 00:07:45.903 ------------ 00:07:45.903 Flush (00h): Supported LBA-Change 00:07:45.903 Write (01h): Supported LBA-Change 00:07:45.903 Read (02h): Supported 00:07:45.903 Compare (05h): Supported 00:07:45.903 Write Zeroes (08h): Supported LBA-Change 00:07:45.903 Dataset Management (09h): Supported LBA-Change 00:07:45.903 Unknown (0Ch): Supported 00:07:45.903 Unknown (12h): Supported 00:07:45.903 Copy (19h): Supported LBA-Change 00:07:45.903 Unknown (1Dh): Supported LBA-Change 00:07:45.903 00:07:45.903 Error Log 00:07:45.903 ========= 00:07:45.903 00:07:45.903 Arbitration 00:07:45.903 
=========== 00:07:45.903 Arbitration Burst: no limit 00:07:45.903 00:07:45.903 Power Management 00:07:45.903 ================ 00:07:45.903 Number of Power States: 1 00:07:45.903 Current Power State: Power State #0 00:07:45.903 Power State #0: 00:07:45.903 Max Power: 25.00 W 00:07:45.903 Non-Operational State: Operational 00:07:45.903 Entry Latency: 16 microseconds 00:07:45.903 Exit Latency: 4 microseconds 00:07:45.903 Relative Read Throughput: 0 00:07:45.903 Relative Read Latency: 0 00:07:45.903 Relative Write Throughput: 0 00:07:45.903 Relative Write Latency: 0 00:07:45.903 Idle Power: Not Reported 00:07:45.903 Active Power: Not Reported 00:07:45.903 Non-Operational Permissive Mode: Not Supported 00:07:45.903 00:07:45.903 Health Information 00:07:45.903 ================== 00:07:45.903 Critical Warnings: 00:07:45.903 Available Spare Space: OK 00:07:45.903 Temperature: OK 00:07:45.903 Device Reliability: OK 00:07:45.903 Read Only: No 00:07:45.903 Volatile Memory Backup: OK 00:07:45.903 Current Temperature: 323 Kelvin (50 Celsius) 00:07:45.903 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:45.903 Available Spare: 0% 00:07:45.903 Available Spare Threshold: 0% 00:07:45.903 Life Percentage Used: 0% 00:07:45.903 Data Units Read: 813 00:07:45.903 Data Units Written: 742 00:07:45.903 Host Read Commands: 37332 00:07:45.903 Host Write Commands: 36757 00:07:45.903 Controller Busy Time: 0 minutes 00:07:45.903 Power Cycles: 0 00:07:45.903 Power On Hours: 0 hours 00:07:45.903 Unsafe Shutdowns: 0 00:07:45.903 Unrecoverable Media Errors: 0 00:07:45.903 Lifetime Error Log Entries: 0 00:07:45.903 Warning Temperature Time: 0 minutes 00:07:45.903 Critical Temperature Time: 0 minutes 00:07:45.903 00:07:45.903 Number of Queues 00:07:45.903 ================ 00:07:45.903 Number of I/O Submission Queues: 64 00:07:45.903 Number of I/O Completion Queues: 64 00:07:45.903 00:07:45.903 ZNS Specific Controller Data 00:07:45.903 ============================ 00:07:45.903 Zone Append Size Limit: 0 00:07:45.903 00:07:45.903 00:07:45.903 Active Namespaces 00:07:45.903 ================= 00:07:45.903 Namespace ID:1 00:07:45.903 Error Recovery Timeout: Unlimited 00:07:45.903 Command Set Identifier: NVM (00h) 00:07:45.903 Deallocate: Supported 00:07:45.903 Deallocated/Unwritten Error: Supported 00:07:45.903 Deallocated Read Value: All 0x00 00:07:45.903 Deallocate in Write Zeroes: Not Supported 00:07:45.903 Deallocated Guard Field: 0xFFFF 00:07:45.903 Flush: Supported 00:07:45.903 Reservation: Not Supported 00:07:45.903 Namespace Sharing Capabilities: Multiple Controllers 00:07:45.903 Size (in LBAs): 262144 (1GiB) 00:07:45.903 Capacity (in LBAs): 262144 (1GiB) 00:07:45.903 Utilization (in LBAs): 262144 (1GiB) 00:07:45.903 Thin Provisioning: Not Supported 00:07:45.903 Per-NS Atomic Units: No 00:07:45.903 Maximum Single Source Range Length: 128 00:07:45.903 Maximum Copy Length: 128 00:07:45.903 Maximum Source Range Count: 128 00:07:45.903 NGUID/EUI64 Never Reused: No 00:07:45.903 Namespace Write Protected: No 00:07:45.903 Endurance group ID: 1 00:07:45.903 Number of LBA Formats: 8 00:07:45.903 Current LBA Format: LBA Format #04 00:07:45.903 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.903 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.903 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.903 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.903 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.903 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.903 LBA Format #06: Data 
Size: 4096 Metadata Size: 16 00:07:45.903 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.903 00:07:45.903 Get Feature FDP: 00:07:45.903 ================ 00:07:45.903 Enabled: Yes 00:07:45.903 FDP configuration index: 0 00:07:45.903 00:07:45.903 FDP configurations log page 00:07:45.903 =========================== 00:07:45.903 Number of FDP configurations: 1 00:07:45.903 Version: 0 00:07:45.903 Size: 112 00:07:45.903 FDP Configuration Descriptor: 0 00:07:45.903 Descriptor Size: 96 00:07:45.904 Reclaim Group Identifier format: 2 00:07:45.904 FDP Volatile Write Cache: Not Present 00:07:45.904 FDP Configuration: Valid 00:07:45.904 Vendor Specific Size: 0 00:07:45.904 Number of Reclaim Groups: 2 00:07:45.904 Number of Reclaim Unit Handles: 8 00:07:45.904 Max Placement Identifiers: 128 00:07:45.904 Number of Namespaces Supported: 256 00:07:45.904 Reclaim unit Nominal Size: 6000000 bytes 00:07:45.904 Estimated Reclaim Unit Time Limit: Not Reported 00:07:45.904 RUH Desc #000: RUH Type: Initially Isolated 00:07:45.904 RUH Desc #001: RUH Type: Initially Isolated 00:07:45.904 RUH Desc #002: RUH Type: Initially Isolated 00:07:45.904 RUH Desc #003: RUH Type: Initially Isolated 00:07:45.904 RUH Desc #004: RUH Type: Initially Isolated 00:07:45.904 RUH Desc #005: RUH Type: Initially Isolated 00:07:45.904 RUH Desc #006: RUH Type: Initially Isolated 00:07:45.904 RUH Desc #007: RUH Type: Initially Isolated 00:07:45.904 00:07:45.904 FDP reclaim unit handle usage log page 00:07:45.904 ====================================== 00:07:45.904 Number of Reclaim Unit Handles: 8 00:07:45.904 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:45.904 RUH Usage Desc #001: RUH Attributes: Unused 00:07:45.904 RUH Usage Desc #002: RUH Attributes: Unused 00:07:45.904 RUH Usage Desc #003: RUH Attributes: Unused 00:07:45.904 RUH Usage Desc #004: RUH Attributes: Unused 00:07:45.904 RUH Usage Desc #005: RUH Attributes: Unused 00:07:45.904 RUH Usage Desc #006: RUH Attributes: Unused 00:07:45.904 RUH Usage Desc #007: RUH Attributes: Unused 00:07:45.904 00:07:45.904 FDP statistics log page 00:07:45.904 ======================= 00:07:45.904 Host bytes with metadata written: 474259456 00:07:45.904 Media bytes with metadata written: 474304512 00:07:45.904 Media bytes erased: 0 00:07:45.904 00:07:45.904 FDP events log page 00:07:45.904 =================== 00:07:45.904 Number of FDP events: 0 00:07:45.904 00:07:45.904 NVM Specific Namespace Data 00:07:45.904 =========================== 00:07:45.904 Logical Block Storage Tag Mask: 0 00:07:45.904 Protection Information Capabilities: 00:07:45.904 16b Guard Protection Information Storage Tag Support: No 00:07:45.904 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.904 Storage Tag Check Read Support: No 00:07:45.904 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.904 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.904 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.904 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.904 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.904 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.904 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.904
Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.904 ===================================================== 00:07:45.904 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:45.904 ===================================================== 00:07:45.904 Controller Capabilities/Features 00:07:45.904 ================================ 00:07:45.904 Vendor ID: 1b36 00:07:45.904 Subsystem Vendor ID: 1af4 00:07:45.904 Serial Number: 12340 00:07:45.904 Model Number: QEMU NVMe Ctrl 00:07:45.904 Firmware Version: 8.0.0 00:07:45.904 Recommended Arb Burst: 6 00:07:45.904 IEEE OUI Identifier: 00 54 52 00:07:45.904 Multi-path I/O 00:07:45.904 May have multiple subsystem ports: No 00:07:45.904 May have multiple controllers: No 00:07:45.904 Associated with SR-IOV VF: No 00:07:45.904 Max Data Transfer Size: 524288 00:07:45.904 Max Number of Namespaces: 256 00:07:45.904 Max Number of I/O Queues: 64 00:07:45.904 NVMe Specification Version (VS): 1.4 00:07:45.904 NVMe Specification Version (Identify): 1.4 00:07:45.904 Maximum Queue Entries: 2048 00:07:45.904 Contiguous Queues Required: Yes 00:07:45.904 Arbitration Mechanisms Supported 00:07:45.904 Weighted Round Robin: Not Supported 00:07:45.904 Vendor Specific: Not Supported 00:07:45.904 Reset Timeout: 7500 ms 00:07:45.904 Doorbell Stride: 4 bytes 00:07:45.904 NVM Subsystem Reset: Not Supported 00:07:45.904 Command Sets Supported 00:07:45.904 NVM Command Set: Supported 00:07:45.904 Boot Partition: Not Supported 00:07:45.904 Memory Page Size Minimum: 4096 bytes 00:07:45.904 Memory Page Size Maximum: 65536 bytes 00:07:45.904 Persistent Memory Region: Not Supported 00:07:45.904 Optional Asynchronous Events Supported 00:07:45.904 Namespace Attribute Notices: Supported 00:07:45.904 Firmware Activation Notices: Not Supported 00:07:45.904 ANA Change Notices: Not Supported 00:07:45.904 PLE Aggregate Log Change Notices: Not Supported 00:07:45.904 LBA Status Info Alert Notices: Not Supported 00:07:45.904 EGE Aggregate Log Change Notices: Not Supported 00:07:45.904 Normal NVM Subsystem Shutdown event: Not Supported 00:07:45.904 Zone Descriptor Change Notices: Not Supported 00:07:45.904 Discovery Log Change Notices: Not Supported 00:07:45.904 Controller Attributes 00:07:45.904 128-bit Host Identifier: Not Supported 00:07:45.904 Non-Operational Permissive Mode: Not Supported 00:07:45.904 NVM Sets: Not Supported 00:07:45.904 Read Recovery Levels: Not Supported 00:07:45.904 Endurance Groups: Not Supported 00:07:45.904 Predictable Latency Mode: Not Supported 00:07:45.904 Traffic Based Keep Alive: Not Supported 00:07:45.904 Namespace Granularity: Not Supported 00:07:45.904 SQ Associations: Not Supported 00:07:45.904 UUID List: Not Supported 00:07:45.904 Multi-Domain Subsystem: Not Supported [2024-11-29 18:21:05.759869] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 74743 terminated unexpected 00:07:45.904 [2024-11-29 18:21:05.760920] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 74743 terminated unexpected 00:07:45.904 [2024-11-29 18:21:05.762802] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 74743 terminated unexpected 00:07:45.904 Fixed Capacity Management: Not Supported 00:07:45.904 Variable Capacity Management: Not Supported 00:07:45.904 Delete Endurance Group: Not Supported 00:07:45.904 Delete NVM Set: Not Supported 00:07:45.904 Extended LBA Formats Supported: Supported 00:07:45.904 Flexible Data
Placement Supported: Not Supported 00:07:45.904 00:07:45.904 Controller Memory Buffer Support 00:07:45.904 ================================ 00:07:45.904 Supported: No 00:07:45.904 00:07:45.904 Persistent Memory Region Support 00:07:45.904 ================================ 00:07:45.904 Supported: No 00:07:45.904 00:07:45.904 Admin Command Set Attributes 00:07:45.904 ============================ 00:07:45.904 Security Send/Receive: Not Supported 00:07:45.904 Format NVM: Supported 00:07:45.904 Firmware Activate/Download: Not Supported 00:07:45.904 Namespace Management: Supported 00:07:45.904 Device Self-Test: Not Supported 00:07:45.904 Directives: Supported 00:07:45.904 NVMe-MI: Not Supported 00:07:45.904 Virtualization Management: Not Supported 00:07:45.904 Doorbell Buffer Config: Supported 00:07:45.904 Get LBA Status Capability: Not Supported 00:07:45.904 Command & Feature Lockdown Capability: Not Supported 00:07:45.904 Abort Command Limit: 4 00:07:45.904 Async Event Request Limit: 4 00:07:45.904 Number of Firmware Slots: N/A 00:07:45.904 Firmware Slot 1 Read-Only: N/A 00:07:45.904 Firmware Activation Without Reset: N/A 00:07:45.904 Multiple Update Detection Support: N/A 00:07:45.904 Firmware Update Granularity: No Information Provided 00:07:45.904 Per-Namespace SMART Log: Yes 00:07:45.904 Asymmetric Namespace Access Log Page: Not Supported 00:07:45.904 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:45.904 Command Effects Log Page: Supported 00:07:45.904 Get Log Page Extended Data: Supported 00:07:45.904 Telemetry Log Pages: Not Supported 00:07:45.904 Persistent Event Log Pages: Not Supported 00:07:45.905 Supported Log Pages Log Page: May Support 00:07:45.905 Commands Supported & Effects Log Page: Not Supported 00:07:45.905 Feature Identifiers & Effects Log Page:May Support 00:07:45.905 NVMe-MI Commands & Effects Log Page: May Support 00:07:45.905 Data Area 4 for Telemetry Log: Not Supported 00:07:45.905 Error Log Page Entries Supported: 1 00:07:45.905 Keep Alive: Not Supported 00:07:45.905 00:07:45.905 NVM Command Set Attributes 00:07:45.905 ========================== 00:07:45.905 Submission Queue Entry Size 00:07:45.905 Max: 64 00:07:45.905 Min: 64 00:07:45.905 Completion Queue Entry Size 00:07:45.905 Max: 16 00:07:45.905 Min: 16 00:07:45.905 Number of Namespaces: 256 00:07:45.905 Compare Command: Supported 00:07:45.905 Write Uncorrectable Command: Not Supported 00:07:45.905 Dataset Management Command: Supported 00:07:45.905 Write Zeroes Command: Supported 00:07:45.905 Set Features Save Field: Supported 00:07:45.905 Reservations: Not Supported 00:07:45.905 Timestamp: Supported 00:07:45.905 Copy: Supported 00:07:45.905 Volatile Write Cache: Present 00:07:45.905 Atomic Write Unit (Normal): 1 00:07:45.905 Atomic Write Unit (PFail): 1 00:07:45.905 Atomic Compare & Write Unit: 1 00:07:45.905 Fused Compare & Write: Not Supported 00:07:45.905 Scatter-Gather List 00:07:45.905 SGL Command Set: Supported 00:07:45.905 SGL Keyed: Not Supported 00:07:45.905 SGL Bit Bucket Descriptor: Not Supported 00:07:45.905 SGL Metadata Pointer: Not Supported 00:07:45.905 Oversized SGL: Not Supported 00:07:45.905 SGL Metadata Address: Not Supported 00:07:45.905 SGL Offset: Not Supported 00:07:45.905 Transport SGL Data Block: Not Supported 00:07:45.905 Replay Protected Memory Block: Not Supported 00:07:45.905 00:07:45.905 Firmware Slot Information 00:07:45.905 ========================= 00:07:45.905 Active slot: 1 00:07:45.905 Slot 1 Firmware Revision: 1.0 00:07:45.905 00:07:45.905 00:07:45.905 Commands Supported and 
Effects 00:07:45.905 ============================== 00:07:45.905 Admin Commands 00:07:45.905 -------------- 00:07:45.905 Delete I/O Submission Queue (00h): Supported 00:07:45.905 Create I/O Submission Queue (01h): Supported 00:07:45.905 Get Log Page (02h): Supported 00:07:45.905 Delete I/O Completion Queue (04h): Supported 00:07:45.905 Create I/O Completion Queue (05h): Supported 00:07:45.905 Identify (06h): Supported 00:07:45.905 Abort (08h): Supported 00:07:45.905 Set Features (09h): Supported 00:07:45.905 Get Features (0Ah): Supported 00:07:45.905 Asynchronous Event Request (0Ch): Supported 00:07:45.905 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:45.905 Directive Send (19h): Supported 00:07:45.905 Directive Receive (1Ah): Supported 00:07:45.905 Virtualization Management (1Ch): Supported 00:07:45.905 Doorbell Buffer Config (7Ch): Supported 00:07:45.905 Format NVM (80h): Supported LBA-Change 00:07:45.905 I/O Commands 00:07:45.905 ------------ 00:07:45.905 Flush (00h): Supported LBA-Change 00:07:45.905 Write (01h): Supported LBA-Change 00:07:45.905 Read (02h): Supported 00:07:45.905 Compare (05h): Supported 00:07:45.905 Write Zeroes (08h): Supported LBA-Change 00:07:45.905 Dataset Management (09h): Supported LBA-Change 00:07:45.905 Unknown (0Ch): Supported 00:07:45.905 Unknown (12h): Supported 00:07:45.905 Copy (19h): Supported LBA-Change 00:07:45.905 Unknown (1Dh): Supported LBA-Change 00:07:45.905 00:07:45.905 Error Log 00:07:45.905 ========= 00:07:45.905 00:07:45.905 Arbitration 00:07:45.905 =========== 00:07:45.905 Arbitration Burst: no limit 00:07:45.905 00:07:45.905 Power Management 00:07:45.905 ================ 00:07:45.905 Number of Power States: 1 00:07:45.905 Current Power State: Power State #0 00:07:45.905 Power State #0: 00:07:45.905 Max Power: 25.00 W 00:07:45.905 Non-Operational State: Operational 00:07:45.905 Entry Latency: 16 microseconds 00:07:45.905 Exit Latency: 4 microseconds 00:07:45.905 Relative Read Throughput: 0 00:07:45.905 Relative Read Latency: 0 00:07:45.905 Relative Write Throughput: 0 00:07:45.905 Relative Write Latency: 0 00:07:45.905 Idle Power: Not Reported 00:07:45.905 Active Power: Not Reported 00:07:45.905 Non-Operational Permissive Mode: Not Supported 00:07:45.905 00:07:45.905 Health Information 00:07:45.905 ================== 00:07:45.905 Critical Warnings: 00:07:45.905 Available Spare Space: OK 00:07:45.905 Temperature: OK 00:07:45.905 Device Reliability: OK 00:07:45.905 Read Only: No 00:07:45.905 Volatile Memory Backup: OK 00:07:45.905 Current Temperature: 323 Kelvin (50 Celsius) 00:07:45.905 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:45.905 Available Spare: 0% 00:07:45.905 Available Spare Threshold: 0% 00:07:45.905 Life Percentage Used: 0% 00:07:45.905 Data Units Read: 650 00:07:45.905 Data Units Written: 578 00:07:45.905 Host Read Commands: 35576 00:07:45.905 Host Write Commands: 35362 00:07:45.905 Controller Busy Time: 0 minutes 00:07:45.905 Power Cycles: 0 00:07:45.905 Power On Hours: 0 hours 00:07:45.905 Unsafe Shutdowns: 0 00:07:45.905 Unrecoverable Media Errors: 0 00:07:45.905 Lifetime Error Log Entries: 0 00:07:45.905 Warning Temperature Time: 0 minutes 00:07:45.905 Critical Temperature Time: 0 minutes 00:07:45.905 00:07:45.905 Number of Queues 00:07:45.905 ================ 00:07:45.905 Number of I/O Submission Queues: 64 00:07:45.905 Number of I/O Completion Queues: 64 00:07:45.905 00:07:45.905 ZNS Specific Controller Data 00:07:45.905 ============================ 00:07:45.905 Zone Append Size Limit: 0 
00:07:45.905 00:07:45.905 00:07:45.905 Active Namespaces 00:07:45.905 ================= 00:07:45.905 Namespace ID:1 00:07:45.905 Error Recovery Timeout: Unlimited 00:07:45.905 Command Set Identifier: NVM (00h) 00:07:45.905 Deallocate: Supported 00:07:45.905 Deallocated/Unwritten Error: Supported 00:07:45.905 Deallocated Read Value: All 0x00 00:07:45.905 Deallocate in Write Zeroes: Not Supported 00:07:45.905 Deallocated Guard Field: 0xFFFF 00:07:45.905 Flush: Supported 00:07:45.905 Reservation: Not Supported 00:07:45.905 Metadata Transferred as: Separate Metadata Buffer 00:07:45.905 Namespace Sharing Capabilities: Private 00:07:45.905 Size (in LBAs): 1548666 (5GiB) 00:07:45.905 Capacity (in LBAs): 1548666 (5GiB) 00:07:45.905 Utilization (in LBAs): 1548666 (5GiB) 00:07:45.905 Thin Provisioning: Not Supported 00:07:45.905 Per-NS Atomic Units: No 00:07:45.905 Maximum Single Source Range Length: 128 00:07:45.905 Maximum Copy Length: 128 00:07:45.905 Maximum Source Range Count: 128 00:07:45.905 NGUID/EUI64 Never Reused: No 00:07:45.905 Namespace Write Protected: No 00:07:45.905 Number of LBA Formats: 8 00:07:45.905 Current LBA Format: LBA Format #07 00:07:45.905 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.905 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.905 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.905 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.905 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.905 [2024-11-29 18:21:05.763815] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 74743 terminated unexpected 00:07:45.905 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.906 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.906 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.906 00:07:45.906 NVM Specific Namespace Data 00:07:45.906 =========================== 00:07:45.906 Logical Block Storage Tag Mask: 0 00:07:45.906 Protection Information Capabilities: 00:07:45.906 16b Guard Protection Information Storage Tag Support: No 00:07:45.906 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.906 Storage Tag Check Read Support: No 00:07:45.906 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.906 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.906 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.906 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.906 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.906 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.906 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.906 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.906 ===================================================== 00:07:45.906 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:45.906 ===================================================== 00:07:45.906 Controller Capabilities/Features 00:07:45.906 ================================ 00:07:45.906 Vendor ID: 1b36 00:07:45.906 Subsystem Vendor ID: 1af4 00:07:45.906 Serial Number: 12342 00:07:45.906 Model Number: QEMU NVMe Ctrl 00:07:45.906 Firmware Version: 8.0.0 00:07:45.906 Recommended Arb Burst: 6
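The LBA format table above pairs a data size with a metadata size for each of the 8 formats; when metadata is interleaved on media rather than carried in a separate buffer, the extended LBA is simply their sum. A small bash check for LBA Format #07 (4096-byte data, 64-byte metadata), with variable names chosen here for illustration:

  data_size=4096; metadata_size=64
  echo "extended LBA: $((data_size + metadata_size)) bytes"  # 4160 bytes per block for Format #07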
00:07:45.906 IEEE OUI Identifier: 00 54 52 00:07:45.906 Multi-path I/O 00:07:45.906 May have multiple subsystem ports: No 00:07:45.906 May have multiple controllers: No 00:07:45.906 Associated with SR-IOV VF: No 00:07:45.906 Max Data Transfer Size: 524288 00:07:45.906 Max Number of Namespaces: 256 00:07:45.906 Max Number of I/O Queues: 64 00:07:45.906 NVMe Specification Version (VS): 1.4 00:07:45.906 NVMe Specification Version (Identify): 1.4 00:07:45.906 Maximum Queue Entries: 2048 00:07:45.906 Contiguous Queues Required: Yes 00:07:45.906 Arbitration Mechanisms Supported 00:07:45.906 Weighted Round Robin: Not Supported 00:07:45.906 Vendor Specific: Not Supported 00:07:45.906 Reset Timeout: 7500 ms 00:07:45.906 Doorbell Stride: 4 bytes 00:07:45.906 NVM Subsystem Reset: Not Supported 00:07:45.906 Command Sets Supported 00:07:45.906 NVM Command Set: Supported 00:07:45.906 Boot Partition: Not Supported 00:07:45.906 Memory Page Size Minimum: 4096 bytes 00:07:45.906 Memory Page Size Maximum: 65536 bytes 00:07:45.906 Persistent Memory Region: Not Supported 00:07:45.906 Optional Asynchronous Events Supported 00:07:45.906 Namespace Attribute Notices: Supported 00:07:45.906 Firmware Activation Notices: Not Supported 00:07:45.906 ANA Change Notices: Not Supported 00:07:45.906 PLE Aggregate Log Change Notices: Not Supported 00:07:45.906 LBA Status Info Alert Notices: Not Supported 00:07:45.906 EGE Aggregate Log Change Notices: Not Supported 00:07:45.906 Normal NVM Subsystem Shutdown event: Not Supported 00:07:45.906 Zone Descriptor Change Notices: Not Supported 00:07:45.906 Discovery Log Change Notices: Not Supported 00:07:45.906 Controller Attributes 00:07:45.906 128-bit Host Identifier: Not Supported 00:07:45.906 Non-Operational Permissive Mode: Not Supported 00:07:45.906 NVM Sets: Not Supported 00:07:45.906 Read Recovery Levels: Not Supported 00:07:45.906 Endurance Groups: Not Supported 00:07:45.906 Predictable Latency Mode: Not Supported 00:07:45.906 Traffic Based Keep ALive: Not Supported 00:07:45.906 Namespace Granularity: Not Supported 00:07:45.906 SQ Associations: Not Supported 00:07:45.906 UUID List: Not Supported 00:07:45.906 Multi-Domain Subsystem: Not Supported 00:07:45.906 Fixed Capacity Management: Not Supported 00:07:45.906 Variable Capacity Management: Not Supported 00:07:45.906 Delete Endurance Group: Not Supported 00:07:45.906 Delete NVM Set: Not Supported 00:07:45.906 Extended LBA Formats Supported: Supported 00:07:45.906 Flexible Data Placement Supported: Not Supported 00:07:45.906 00:07:45.906 Controller Memory Buffer Support 00:07:45.906 ================================ 00:07:45.906 Supported: No 00:07:45.906 00:07:45.906 Persistent Memory Region Support 00:07:45.906 ================================ 00:07:45.906 Supported: No 00:07:45.906 00:07:45.906 Admin Command Set Attributes 00:07:45.906 ============================ 00:07:45.906 Security Send/Receive: Not Supported 00:07:45.906 Format NVM: Supported 00:07:45.906 Firmware Activate/Download: Not Supported 00:07:45.906 Namespace Management: Supported 00:07:45.906 Device Self-Test: Not Supported 00:07:45.906 Directives: Supported 00:07:45.906 NVMe-MI: Not Supported 00:07:45.906 Virtualization Management: Not Supported 00:07:45.906 Doorbell Buffer Config: Supported 00:07:45.906 Get LBA Status Capability: Not Supported 00:07:45.906 Command & Feature Lockdown Capability: Not Supported 00:07:45.906 Abort Command Limit: 4 00:07:45.906 Async Event Request Limit: 4 00:07:45.906 Number of Firmware Slots: N/A 00:07:45.906 Firmware Slot 
1 Read-Only: N/A 00:07:45.906 Firmware Activation Without Reset: N/A 00:07:45.906 Multiple Update Detection Support: N/A 00:07:45.906 Firmware Update Granularity: No Information Provided 00:07:45.906 Per-Namespace SMART Log: Yes 00:07:45.906 Asymmetric Namespace Access Log Page: Not Supported 00:07:45.906 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:45.906 Command Effects Log Page: Supported 00:07:45.906 Get Log Page Extended Data: Supported 00:07:45.906 Telemetry Log Pages: Not Supported 00:07:45.906 Persistent Event Log Pages: Not Supported 00:07:45.906 Supported Log Pages Log Page: May Support 00:07:45.906 Commands Supported & Effects Log Page: Not Supported 00:07:45.906 Feature Identifiers & Effects Log Page:May Support 00:07:45.906 NVMe-MI Commands & Effects Log Page: May Support 00:07:45.906 Data Area 4 for Telemetry Log: Not Supported 00:07:45.906 Error Log Page Entries Supported: 1 00:07:45.906 Keep Alive: Not Supported 00:07:45.906 00:07:45.906 NVM Command Set Attributes 00:07:45.906 ========================== 00:07:45.906 Submission Queue Entry Size 00:07:45.906 Max: 64 00:07:45.906 Min: 64 00:07:45.907 Completion Queue Entry Size 00:07:45.907 Max: 16 00:07:45.907 Min: 16 00:07:45.907 Number of Namespaces: 256 00:07:45.907 Compare Command: Supported 00:07:45.907 Write Uncorrectable Command: Not Supported 00:07:45.907 Dataset Management Command: Supported 00:07:45.907 Write Zeroes Command: Supported 00:07:45.907 Set Features Save Field: Supported 00:07:45.907 Reservations: Not Supported 00:07:45.907 Timestamp: Supported 00:07:45.907 Copy: Supported 00:07:45.907 Volatile Write Cache: Present 00:07:45.907 Atomic Write Unit (Normal): 1 00:07:45.907 Atomic Write Unit (PFail): 1 00:07:45.907 Atomic Compare & Write Unit: 1 00:07:45.907 Fused Compare & Write: Not Supported 00:07:45.907 Scatter-Gather List 00:07:45.907 SGL Command Set: Supported 00:07:45.907 SGL Keyed: Not Supported 00:07:45.907 SGL Bit Bucket Descriptor: Not Supported 00:07:45.907 SGL Metadata Pointer: Not Supported 00:07:45.907 Oversized SGL: Not Supported 00:07:45.907 SGL Metadata Address: Not Supported 00:07:45.907 SGL Offset: Not Supported 00:07:45.907 Transport SGL Data Block: Not Supported 00:07:45.907 Replay Protected Memory Block: Not Supported 00:07:45.907 00:07:45.907 Firmware Slot Information 00:07:45.907 ========================= 00:07:45.907 Active slot: 1 00:07:45.907 Slot 1 Firmware Revision: 1.0 00:07:45.907 00:07:45.907 00:07:45.907 Commands Supported and Effects 00:07:45.907 ============================== 00:07:45.907 Admin Commands 00:07:45.907 -------------- 00:07:45.907 Delete I/O Submission Queue (00h): Supported 00:07:45.907 Create I/O Submission Queue (01h): Supported 00:07:45.907 Get Log Page (02h): Supported 00:07:45.907 Delete I/O Completion Queue (04h): Supported 00:07:45.907 Create I/O Completion Queue (05h): Supported 00:07:45.907 Identify (06h): Supported 00:07:45.907 Abort (08h): Supported 00:07:45.907 Set Features (09h): Supported 00:07:45.907 Get Features (0Ah): Supported 00:07:45.907 Asynchronous Event Request (0Ch): Supported 00:07:45.907 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:45.907 Directive Send (19h): Supported 00:07:45.907 Directive Receive (1Ah): Supported 00:07:45.907 Virtualization Management (1Ch): Supported 00:07:45.907 Doorbell Buffer Config (7Ch): Supported 00:07:45.907 Format NVM (80h): Supported LBA-Change 00:07:45.907 I/O Commands 00:07:45.907 ------------ 00:07:45.907 Flush (00h): Supported LBA-Change 00:07:45.907 Write (01h): Supported 
LBA-Change 00:07:45.907 Read (02h): Supported 00:07:45.907 Compare (05h): Supported 00:07:45.907 Write Zeroes (08h): Supported LBA-Change 00:07:45.907 Dataset Management (09h): Supported LBA-Change 00:07:45.907 Unknown (0Ch): Supported 00:07:45.907 Unknown (12h): Supported 00:07:45.907 Copy (19h): Supported LBA-Change 00:07:45.907 Unknown (1Dh): Supported LBA-Change 00:07:45.907 00:07:45.907 Error Log 00:07:45.907 ========= 00:07:45.907 00:07:45.907 Arbitration 00:07:45.907 =========== 00:07:45.907 Arbitration Burst: no limit 00:07:45.907 00:07:45.907 Power Management 00:07:45.907 ================ 00:07:45.907 Number of Power States: 1 00:07:45.907 Current Power State: Power State #0 00:07:45.907 Power State #0: 00:07:45.907 Max Power: 25.00 W 00:07:45.907 Non-Operational State: Operational 00:07:45.907 Entry Latency: 16 microseconds 00:07:45.907 Exit Latency: 4 microseconds 00:07:45.907 Relative Read Throughput: 0 00:07:45.907 Relative Read Latency: 0 00:07:45.907 Relative Write Throughput: 0 00:07:45.907 Relative Write Latency: 0 00:07:45.907 Idle Power: Not Reported 00:07:45.907 Active Power: Not Reported 00:07:45.907 Non-Operational Permissive Mode: Not Supported 00:07:45.907 00:07:45.907 Health Information 00:07:45.907 ================== 00:07:45.907 Critical Warnings: 00:07:45.907 Available Spare Space: OK 00:07:45.907 Temperature: OK 00:07:45.907 Device Reliability: OK 00:07:45.907 Read Only: No 00:07:45.907 Volatile Memory Backup: OK 00:07:45.907 Current Temperature: 323 Kelvin (50 Celsius) 00:07:45.907 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:45.907 Available Spare: 0% 00:07:45.907 Available Spare Threshold: 0% 00:07:45.907 Life Percentage Used: 0% 00:07:45.907 Data Units Read: 2086 00:07:45.907 Data Units Written: 1873 00:07:45.907 Host Read Commands: 108938 00:07:45.907 Host Write Commands: 107207 00:07:45.907 Controller Busy Time: 0 minutes 00:07:45.907 Power Cycles: 0 00:07:45.907 Power On Hours: 0 hours 00:07:45.907 Unsafe Shutdowns: 0 00:07:45.907 Unrecoverable Media Errors: 0 00:07:45.907 Lifetime Error Log Entries: 0 00:07:45.907 Warning Temperature Time: 0 minutes 00:07:45.907 Critical Temperature Time: 0 minutes 00:07:45.907 00:07:45.907 Number of Queues 00:07:45.907 ================ 00:07:45.907 Number of I/O Submission Queues: 64 00:07:45.907 Number of I/O Completion Queues: 64 00:07:45.907 00:07:45.907 ZNS Specific Controller Data 00:07:45.907 ============================ 00:07:45.907 Zone Append Size Limit: 0 00:07:45.907 00:07:45.907 00:07:45.907 Active Namespaces 00:07:45.907 ================= 00:07:45.907 Namespace ID:1 00:07:45.907 Error Recovery Timeout: Unlimited 00:07:45.907 Command Set Identifier: NVM (00h) 00:07:45.907 Deallocate: Supported 00:07:45.907 Deallocated/Unwritten Error: Supported 00:07:45.907 Deallocated Read Value: All 0x00 00:07:45.907 Deallocate in Write Zeroes: Not Supported 00:07:45.907 Deallocated Guard Field: 0xFFFF 00:07:45.907 Flush: Supported 00:07:45.907 Reservation: Not Supported 00:07:45.907 Namespace Sharing Capabilities: Private 00:07:45.907 Size (in LBAs): 1048576 (4GiB) 00:07:45.907 Capacity (in LBAs): 1048576 (4GiB) 00:07:45.907 Utilization (in LBAs): 1048576 (4GiB) 00:07:45.907 Thin Provisioning: Not Supported 00:07:45.907 Per-NS Atomic Units: No 00:07:45.907 Maximum Single Source Range Length: 128 00:07:45.907 Maximum Copy Length: 128 00:07:45.907 Maximum Source Range Count: 128 00:07:45.907 NGUID/EUI64 Never Reused: No 00:07:45.907 Namespace Write Protected: No 00:07:45.907 Number of LBA Formats: 8 00:07:45.907 
Current LBA Format: LBA Format #04 00:07:45.907 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.907 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.907 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.907 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.907 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.907 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.907 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.907 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.907 00:07:45.907 NVM Specific Namespace Data 00:07:45.907 =========================== 00:07:45.907 Logical Block Storage Tag Mask: 0 00:07:45.907 Protection Information Capabilities: 00:07:45.907 16b Guard Protection Information Storage Tag Support: No 00:07:45.907 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.907 Storage Tag Check Read Support: No 00:07:45.907 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.907 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.907 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.907 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.907 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.907 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.907 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.907 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.907 Namespace ID:2 00:07:45.907 Error Recovery Timeout: Unlimited 00:07:45.907 Command Set Identifier: NVM (00h) 00:07:45.907 Deallocate: Supported 00:07:45.907 Deallocated/Unwritten Error: Supported 00:07:45.907 Deallocated Read Value: All 0x00 00:07:45.908 Deallocate in Write Zeroes: Not Supported 00:07:45.908 Deallocated Guard Field: 0xFFFF 00:07:45.908 Flush: Supported 00:07:45.908 Reservation: Not Supported 00:07:45.908 Namespace Sharing Capabilities: Private 00:07:45.908 Size (in LBAs): 1048576 (4GiB) 00:07:45.908 Capacity (in LBAs): 1048576 (4GiB) 00:07:45.908 Utilization (in LBAs): 1048576 (4GiB) 00:07:45.908 Thin Provisioning: Not Supported 00:07:45.908 Per-NS Atomic Units: No 00:07:45.908 Maximum Single Source Range Length: 128 00:07:45.908 Maximum Copy Length: 128 00:07:45.908 Maximum Source Range Count: 128 00:07:45.908 NGUID/EUI64 Never Reused: No 00:07:45.908 Namespace Write Protected: No 00:07:45.908 Number of LBA Formats: 8 00:07:45.908 Current LBA Format: LBA Format #04 00:07:45.908 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.908 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.908 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.908 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.908 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.908 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.908 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.908 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.908 00:07:45.908 NVM Specific Namespace Data 00:07:45.908 =========================== 00:07:45.908 Logical Block Storage Tag Mask: 0 00:07:45.908 Protection Information Capabilities: 00:07:45.908 16b Guard Protection Information Storage Tag Support: No 00:07:45.908 16b Guard 
Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.908 Storage Tag Check Read Support: No 00:07:45.908 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.908 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.908 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.908 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.908 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.908 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.908 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.908 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.908 Namespace ID:3 00:07:45.908 Error Recovery Timeout: Unlimited 00:07:45.908 Command Set Identifier: NVM (00h) 00:07:45.908 Deallocate: Supported 00:07:45.908 Deallocated/Unwritten Error: Supported 00:07:45.908 Deallocated Read Value: All 0x00 00:07:45.908 Deallocate in Write Zeroes: Not Supported 00:07:45.908 Deallocated Guard Field: 0xFFFF 00:07:45.908 Flush: Supported 00:07:45.908 Reservation: Not Supported 00:07:45.908 Namespace Sharing Capabilities: Private 00:07:45.908 Size (in LBAs): 1048576 (4GiB) 00:07:45.908 Capacity (in LBAs): 1048576 (4GiB) 00:07:45.908 Utilization (in LBAs): 1048576 (4GiB) 00:07:45.908 Thin Provisioning: Not Supported 00:07:45.908 Per-NS Atomic Units: No 00:07:45.908 Maximum Single Source Range Length: 128 00:07:45.908 Maximum Copy Length: 128 00:07:45.908 Maximum Source Range Count: 128 00:07:45.908 NGUID/EUI64 Never Reused: No 00:07:45.908 Namespace Write Protected: No 00:07:45.908 Number of LBA Formats: 8 00:07:45.908 Current LBA Format: LBA Format #04 00:07:45.908 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.908 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.908 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.908 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.908 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.908 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.908 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.908 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.908 00:07:45.908 NVM Specific Namespace Data 00:07:45.908 =========================== 00:07:45.908 Logical Block Storage Tag Mask: 0 00:07:45.908 Protection Information Capabilities: 00:07:45.908 16b Guard Protection Information Storage Tag Support: No 00:07:45.908 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.908 Storage Tag Check Read Support: No 00:07:45.908 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.908 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.908 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.908 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.908 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.908 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.908 Extended LBA Format #06: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:45.908 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.178 18:21:05 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:46.178 18:21:05 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:46.178 ===================================================== 00:07:46.178 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:46.178 ===================================================== 00:07:46.178 Controller Capabilities/Features 00:07:46.178 ================================ 00:07:46.178 Vendor ID: 1b36 00:07:46.178 Subsystem Vendor ID: 1af4 00:07:46.178 Serial Number: 12340 00:07:46.178 Model Number: QEMU NVMe Ctrl 00:07:46.178 Firmware Version: 8.0.0 00:07:46.178 Recommended Arb Burst: 6 00:07:46.178 IEEE OUI Identifier: 00 54 52 00:07:46.178 Multi-path I/O 00:07:46.178 May have multiple subsystem ports: No 00:07:46.178 May have multiple controllers: No 00:07:46.178 Associated with SR-IOV VF: No 00:07:46.178 Max Data Transfer Size: 524288 00:07:46.178 Max Number of Namespaces: 256 00:07:46.178 Max Number of I/O Queues: 64 00:07:46.178 NVMe Specification Version (VS): 1.4 00:07:46.178 NVMe Specification Version (Identify): 1.4 00:07:46.178 Maximum Queue Entries: 2048 00:07:46.178 Contiguous Queues Required: Yes 00:07:46.178 Arbitration Mechanisms Supported 00:07:46.178 Weighted Round Robin: Not Supported 00:07:46.178 Vendor Specific: Not Supported 00:07:46.178 Reset Timeout: 7500 ms 00:07:46.178 Doorbell Stride: 4 bytes 00:07:46.178 NVM Subsystem Reset: Not Supported 00:07:46.178 Command Sets Supported 00:07:46.178 NVM Command Set: Supported 00:07:46.178 Boot Partition: Not Supported 00:07:46.178 Memory Page Size Minimum: 4096 bytes 00:07:46.178 Memory Page Size Maximum: 65536 bytes 00:07:46.178 Persistent Memory Region: Not Supported 00:07:46.178 Optional Asynchronous Events Supported 00:07:46.178 Namespace Attribute Notices: Supported 00:07:46.178 Firmware Activation Notices: Not Supported 00:07:46.178 ANA Change Notices: Not Supported 00:07:46.178 PLE Aggregate Log Change Notices: Not Supported 00:07:46.178 LBA Status Info Alert Notices: Not Supported 00:07:46.178 EGE Aggregate Log Change Notices: Not Supported 00:07:46.178 Normal NVM Subsystem Shutdown event: Not Supported 00:07:46.179 Zone Descriptor Change Notices: Not Supported 00:07:46.179 Discovery Log Change Notices: Not Supported 00:07:46.179 Controller Attributes 00:07:46.179 128-bit Host Identifier: Not Supported 00:07:46.179 Non-Operational Permissive Mode: Not Supported 00:07:46.179 NVM Sets: Not Supported 00:07:46.179 Read Recovery Levels: Not Supported 00:07:46.179 Endurance Groups: Not Supported 00:07:46.179 Predictable Latency Mode: Not Supported 00:07:46.179 Traffic Based Keep ALive: Not Supported 00:07:46.179 Namespace Granularity: Not Supported 00:07:46.179 SQ Associations: Not Supported 00:07:46.179 UUID List: Not Supported 00:07:46.179 Multi-Domain Subsystem: Not Supported 00:07:46.179 Fixed Capacity Management: Not Supported 00:07:46.179 Variable Capacity Management: Not Supported 00:07:46.179 Delete Endurance Group: Not Supported 00:07:46.179 Delete NVM Set: Not Supported 00:07:46.179 Extended LBA Formats Supported: Supported 00:07:46.179 Flexible Data Placement Supported: Not Supported 00:07:46.179 00:07:46.179 Controller Memory Buffer Support 00:07:46.179 ================================ 00:07:46.179 Supported: No 
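The xtrace records above (nvme.sh lines 15-16) show the test driving spdk_nvme_identify once per controller address. A minimal sketch of that loop as traced; the bdfs array is populated here from the addresses seen in this log, while its real contents come from the test environment:

  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0)  # assumed from the controllers identified in this run
  for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0
  done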
00:07:46.179 00:07:46.179 Persistent Memory Region Support 00:07:46.179 ================================ 00:07:46.179 Supported: No 00:07:46.179 00:07:46.179 Admin Command Set Attributes 00:07:46.179 ============================ 00:07:46.179 Security Send/Receive: Not Supported 00:07:46.179 Format NVM: Supported 00:07:46.179 Firmware Activate/Download: Not Supported 00:07:46.179 Namespace Management: Supported 00:07:46.179 Device Self-Test: Not Supported 00:07:46.179 Directives: Supported 00:07:46.179 NVMe-MI: Not Supported 00:07:46.179 Virtualization Management: Not Supported 00:07:46.179 Doorbell Buffer Config: Supported 00:07:46.179 Get LBA Status Capability: Not Supported 00:07:46.179 Command & Feature Lockdown Capability: Not Supported 00:07:46.179 Abort Command Limit: 4 00:07:46.179 Async Event Request Limit: 4 00:07:46.179 Number of Firmware Slots: N/A 00:07:46.179 Firmware Slot 1 Read-Only: N/A 00:07:46.179 Firmware Activation Without Reset: N/A 00:07:46.179 Multiple Update Detection Support: N/A 00:07:46.179 Firmware Update Granularity: No Information Provided 00:07:46.179 Per-Namespace SMART Log: Yes 00:07:46.179 Asymmetric Namespace Access Log Page: Not Supported 00:07:46.179 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:46.179 Command Effects Log Page: Supported 00:07:46.179 Get Log Page Extended Data: Supported 00:07:46.179 Telemetry Log Pages: Not Supported 00:07:46.179 Persistent Event Log Pages: Not Supported 00:07:46.179 Supported Log Pages Log Page: May Support 00:07:46.179 Commands Supported & Effects Log Page: Not Supported 00:07:46.179 Feature Identifiers & Effects Log Page:May Support 00:07:46.179 NVMe-MI Commands & Effects Log Page: May Support 00:07:46.179 Data Area 4 for Telemetry Log: Not Supported 00:07:46.179 Error Log Page Entries Supported: 1 00:07:46.179 Keep Alive: Not Supported 00:07:46.179 00:07:46.179 NVM Command Set Attributes 00:07:46.179 ========================== 00:07:46.179 Submission Queue Entry Size 00:07:46.179 Max: 64 00:07:46.179 Min: 64 00:07:46.179 Completion Queue Entry Size 00:07:46.179 Max: 16 00:07:46.179 Min: 16 00:07:46.179 Number of Namespaces: 256 00:07:46.179 Compare Command: Supported 00:07:46.179 Write Uncorrectable Command: Not Supported 00:07:46.179 Dataset Management Command: Supported 00:07:46.179 Write Zeroes Command: Supported 00:07:46.179 Set Features Save Field: Supported 00:07:46.179 Reservations: Not Supported 00:07:46.179 Timestamp: Supported 00:07:46.179 Copy: Supported 00:07:46.179 Volatile Write Cache: Present 00:07:46.179 Atomic Write Unit (Normal): 1 00:07:46.179 Atomic Write Unit (PFail): 1 00:07:46.179 Atomic Compare & Write Unit: 1 00:07:46.179 Fused Compare & Write: Not Supported 00:07:46.179 Scatter-Gather List 00:07:46.179 SGL Command Set: Supported 00:07:46.179 SGL Keyed: Not Supported 00:07:46.179 SGL Bit Bucket Descriptor: Not Supported 00:07:46.179 SGL Metadata Pointer: Not Supported 00:07:46.179 Oversized SGL: Not Supported 00:07:46.179 SGL Metadata Address: Not Supported 00:07:46.179 SGL Offset: Not Supported 00:07:46.179 Transport SGL Data Block: Not Supported 00:07:46.179 Replay Protected Memory Block: Not Supported 00:07:46.179 00:07:46.179 Firmware Slot Information 00:07:46.179 ========================= 00:07:46.179 Active slot: 1 00:07:46.179 Slot 1 Firmware Revision: 1.0 00:07:46.179 00:07:46.179 00:07:46.179 Commands Supported and Effects 00:07:46.179 ============================== 00:07:46.179 Admin Commands 00:07:46.179 -------------- 00:07:46.179 Delete I/O Submission Queue (00h): Supported 
00:07:46.179 Create I/O Submission Queue (01h): Supported 00:07:46.179 Get Log Page (02h): Supported 00:07:46.179 Delete I/O Completion Queue (04h): Supported 00:07:46.179 Create I/O Completion Queue (05h): Supported 00:07:46.179 Identify (06h): Supported 00:07:46.179 Abort (08h): Supported 00:07:46.179 Set Features (09h): Supported 00:07:46.179 Get Features (0Ah): Supported 00:07:46.179 Asynchronous Event Request (0Ch): Supported 00:07:46.179 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:46.179 Directive Send (19h): Supported 00:07:46.179 Directive Receive (1Ah): Supported 00:07:46.179 Virtualization Management (1Ch): Supported 00:07:46.179 Doorbell Buffer Config (7Ch): Supported 00:07:46.179 Format NVM (80h): Supported LBA-Change 00:07:46.179 I/O Commands 00:07:46.179 ------------ 00:07:46.179 Flush (00h): Supported LBA-Change 00:07:46.179 Write (01h): Supported LBA-Change 00:07:46.179 Read (02h): Supported 00:07:46.179 Compare (05h): Supported 00:07:46.179 Write Zeroes (08h): Supported LBA-Change 00:07:46.179 Dataset Management (09h): Supported LBA-Change 00:07:46.179 Unknown (0Ch): Supported 00:07:46.179 Unknown (12h): Supported 00:07:46.179 Copy (19h): Supported LBA-Change 00:07:46.179 Unknown (1Dh): Supported LBA-Change 00:07:46.179 00:07:46.179 Error Log 00:07:46.179 ========= 00:07:46.179 00:07:46.179 Arbitration 00:07:46.179 =========== 00:07:46.179 Arbitration Burst: no limit 00:07:46.179 00:07:46.179 Power Management 00:07:46.179 ================ 00:07:46.179 Number of Power States: 1 00:07:46.179 Current Power State: Power State #0 00:07:46.179 Power State #0: 00:07:46.179 Max Power: 25.00 W 00:07:46.179 Non-Operational State: Operational 00:07:46.179 Entry Latency: 16 microseconds 00:07:46.179 Exit Latency: 4 microseconds 00:07:46.179 Relative Read Throughput: 0 00:07:46.179 Relative Read Latency: 0 00:07:46.179 Relative Write Throughput: 0 00:07:46.179 Relative Write Latency: 0 00:07:46.179 Idle Power: Not Reported 00:07:46.179 Active Power: Not Reported 00:07:46.179 Non-Operational Permissive Mode: Not Supported 00:07:46.179 00:07:46.179 Health Information 00:07:46.179 ================== 00:07:46.179 Critical Warnings: 00:07:46.179 Available Spare Space: OK 00:07:46.179 Temperature: OK 00:07:46.179 Device Reliability: OK 00:07:46.179 Read Only: No 00:07:46.179 Volatile Memory Backup: OK 00:07:46.179 Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.179 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:46.179 Available Spare: 0% 00:07:46.179 Available Spare Threshold: 0% 00:07:46.179 Life Percentage Used: 0% 00:07:46.179 Data Units Read: 650 00:07:46.179 Data Units Written: 578 00:07:46.179 Host Read Commands: 35576 00:07:46.179 Host Write Commands: 35362 00:07:46.179 Controller Busy Time: 0 minutes 00:07:46.179 Power Cycles: 0 00:07:46.179 Power On Hours: 0 hours 00:07:46.179 Unsafe Shutdowns: 0 00:07:46.179 Unrecoverable Media Errors: 0 00:07:46.179 Lifetime Error Log Entries: 0 00:07:46.179 Warning Temperature Time: 0 minutes 00:07:46.179 Critical Temperature Time: 0 minutes 00:07:46.179 00:07:46.179 Number of Queues 00:07:46.179 ================ 00:07:46.179 Number of I/O Submission Queues: 64 00:07:46.179 Number of I/O Completion Queues: 64 00:07:46.179 00:07:46.179 ZNS Specific Controller Data 00:07:46.179 ============================ 00:07:46.179 Zone Append Size Limit: 0 00:07:46.179 00:07:46.179 00:07:46.179 Active Namespaces 00:07:46.179 ================= 00:07:46.179 Namespace ID:1 00:07:46.179 Error Recovery Timeout: Unlimited 00:07:46.179 
Command Set Identifier: NVM (00h) 00:07:46.179 Deallocate: Supported 00:07:46.179 Deallocated/Unwritten Error: Supported 00:07:46.179 Deallocated Read Value: All 0x00 00:07:46.179 Deallocate in Write Zeroes: Not Supported 00:07:46.179 Deallocated Guard Field: 0xFFFF 00:07:46.179 Flush: Supported 00:07:46.179 Reservation: Not Supported 00:07:46.179 Metadata Transferred as: Separate Metadata Buffer 00:07:46.179 Namespace Sharing Capabilities: Private 00:07:46.180 Size (in LBAs): 1548666 (5GiB) 00:07:46.180 Capacity (in LBAs): 1548666 (5GiB) 00:07:46.180 Utilization (in LBAs): 1548666 (5GiB) 00:07:46.180 Thin Provisioning: Not Supported 00:07:46.180 Per-NS Atomic Units: No 00:07:46.180 Maximum Single Source Range Length: 128 00:07:46.180 Maximum Copy Length: 128 00:07:46.180 Maximum Source Range Count: 128 00:07:46.180 NGUID/EUI64 Never Reused: No 00:07:46.180 Namespace Write Protected: No 00:07:46.180 Number of LBA Formats: 8 00:07:46.180 Current LBA Format: LBA Format #07 00:07:46.180 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.180 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.180 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.180 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.180 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.180 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.180 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.180 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.180 00:07:46.180 NVM Specific Namespace Data 00:07:46.180 =========================== 00:07:46.180 Logical Block Storage Tag Mask: 0 00:07:46.180 Protection Information Capabilities: 00:07:46.180 16b Guard Protection Information Storage Tag Support: No 00:07:46.180 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.180 Storage Tag Check Read Support: No 00:07:46.180 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.180 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.180 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.180 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.180 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.180 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.180 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.180 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.180 18:21:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:46.180 18:21:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:46.439 ===================================================== 00:07:46.439 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:46.439 ===================================================== 00:07:46.439 Controller Capabilities/Features 00:07:46.439 ================================ 00:07:46.439 Vendor ID: 1b36 00:07:46.439 Subsystem Vendor ID: 1af4 00:07:46.439 Serial Number: 12341 00:07:46.439 Model Number: QEMU NVMe Ctrl 00:07:46.439 Firmware Version: 8.0.0 00:07:46.439 Recommended Arb Burst: 6 00:07:46.439 IEEE OUI Identifier: 00 54 52 00:07:46.439 Multi-path I/O 00:07:46.439 
May have multiple subsystem ports: No 00:07:46.439 May have multiple controllers: No 00:07:46.439 Associated with SR-IOV VF: No 00:07:46.439 Max Data Transfer Size: 524288 00:07:46.439 Max Number of Namespaces: 256 00:07:46.439 Max Number of I/O Queues: 64 00:07:46.439 NVMe Specification Version (VS): 1.4 00:07:46.439 NVMe Specification Version (Identify): 1.4 00:07:46.439 Maximum Queue Entries: 2048 00:07:46.439 Contiguous Queues Required: Yes 00:07:46.439 Arbitration Mechanisms Supported 00:07:46.439 Weighted Round Robin: Not Supported 00:07:46.439 Vendor Specific: Not Supported 00:07:46.439 Reset Timeout: 7500 ms 00:07:46.439 Doorbell Stride: 4 bytes 00:07:46.439 NVM Subsystem Reset: Not Supported 00:07:46.439 Command Sets Supported 00:07:46.439 NVM Command Set: Supported 00:07:46.439 Boot Partition: Not Supported 00:07:46.439 Memory Page Size Minimum: 4096 bytes 00:07:46.439 Memory Page Size Maximum: 65536 bytes 00:07:46.439 Persistent Memory Region: Not Supported 00:07:46.439 Optional Asynchronous Events Supported 00:07:46.439 Namespace Attribute Notices: Supported 00:07:46.439 Firmware Activation Notices: Not Supported 00:07:46.439 ANA Change Notices: Not Supported 00:07:46.439 PLE Aggregate Log Change Notices: Not Supported 00:07:46.439 LBA Status Info Alert Notices: Not Supported 00:07:46.439 EGE Aggregate Log Change Notices: Not Supported 00:07:46.439 Normal NVM Subsystem Shutdown event: Not Supported 00:07:46.439 Zone Descriptor Change Notices: Not Supported 00:07:46.439 Discovery Log Change Notices: Not Supported 00:07:46.439 Controller Attributes 00:07:46.440 128-bit Host Identifier: Not Supported 00:07:46.440 Non-Operational Permissive Mode: Not Supported 00:07:46.440 NVM Sets: Not Supported 00:07:46.440 Read Recovery Levels: Not Supported 00:07:46.440 Endurance Groups: Not Supported 00:07:46.440 Predictable Latency Mode: Not Supported 00:07:46.440 Traffic Based Keep ALive: Not Supported 00:07:46.440 Namespace Granularity: Not Supported 00:07:46.440 SQ Associations: Not Supported 00:07:46.440 UUID List: Not Supported 00:07:46.440 Multi-Domain Subsystem: Not Supported 00:07:46.440 Fixed Capacity Management: Not Supported 00:07:46.440 Variable Capacity Management: Not Supported 00:07:46.440 Delete Endurance Group: Not Supported 00:07:46.440 Delete NVM Set: Not Supported 00:07:46.440 Extended LBA Formats Supported: Supported 00:07:46.440 Flexible Data Placement Supported: Not Supported 00:07:46.440 00:07:46.440 Controller Memory Buffer Support 00:07:46.440 ================================ 00:07:46.440 Supported: No 00:07:46.440 00:07:46.440 Persistent Memory Region Support 00:07:46.440 ================================ 00:07:46.440 Supported: No 00:07:46.440 00:07:46.440 Admin Command Set Attributes 00:07:46.440 ============================ 00:07:46.440 Security Send/Receive: Not Supported 00:07:46.440 Format NVM: Supported 00:07:46.440 Firmware Activate/Download: Not Supported 00:07:46.440 Namespace Management: Supported 00:07:46.440 Device Self-Test: Not Supported 00:07:46.440 Directives: Supported 00:07:46.440 NVMe-MI: Not Supported 00:07:46.440 Virtualization Management: Not Supported 00:07:46.440 Doorbell Buffer Config: Supported 00:07:46.440 Get LBA Status Capability: Not Supported 00:07:46.440 Command & Feature Lockdown Capability: Not Supported 00:07:46.440 Abort Command Limit: 4 00:07:46.440 Async Event Request Limit: 4 00:07:46.440 Number of Firmware Slots: N/A 00:07:46.440 Firmware Slot 1 Read-Only: N/A 00:07:46.440 Firmware Activation Without Reset: N/A 00:07:46.440 
Multiple Update Detection Support: N/A 00:07:46.440 Firmware Update Granularity: No Information Provided 00:07:46.440 Per-Namespace SMART Log: Yes 00:07:46.440 Asymmetric Namespace Access Log Page: Not Supported 00:07:46.440 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:46.440 Command Effects Log Page: Supported 00:07:46.440 Get Log Page Extended Data: Supported 00:07:46.440 Telemetry Log Pages: Not Supported 00:07:46.440 Persistent Event Log Pages: Not Supported 00:07:46.440 Supported Log Pages Log Page: May Support 00:07:46.440 Commands Supported & Effects Log Page: Not Supported 00:07:46.440 Feature Identifiers & Effects Log Page:May Support 00:07:46.440 NVMe-MI Commands & Effects Log Page: May Support 00:07:46.440 Data Area 4 for Telemetry Log: Not Supported 00:07:46.440 Error Log Page Entries Supported: 1 00:07:46.440 Keep Alive: Not Supported 00:07:46.440 00:07:46.440 NVM Command Set Attributes 00:07:46.440 ========================== 00:07:46.440 Submission Queue Entry Size 00:07:46.440 Max: 64 00:07:46.440 Min: 64 00:07:46.440 Completion Queue Entry Size 00:07:46.440 Max: 16 00:07:46.440 Min: 16 00:07:46.440 Number of Namespaces: 256 00:07:46.440 Compare Command: Supported 00:07:46.440 Write Uncorrectable Command: Not Supported 00:07:46.440 Dataset Management Command: Supported 00:07:46.440 Write Zeroes Command: Supported 00:07:46.440 Set Features Save Field: Supported 00:07:46.440 Reservations: Not Supported 00:07:46.440 Timestamp: Supported 00:07:46.440 Copy: Supported 00:07:46.440 Volatile Write Cache: Present 00:07:46.440 Atomic Write Unit (Normal): 1 00:07:46.440 Atomic Write Unit (PFail): 1 00:07:46.440 Atomic Compare & Write Unit: 1 00:07:46.440 Fused Compare & Write: Not Supported 00:07:46.440 Scatter-Gather List 00:07:46.440 SGL Command Set: Supported 00:07:46.440 SGL Keyed: Not Supported 00:07:46.440 SGL Bit Bucket Descriptor: Not Supported 00:07:46.440 SGL Metadata Pointer: Not Supported 00:07:46.440 Oversized SGL: Not Supported 00:07:46.440 SGL Metadata Address: Not Supported 00:07:46.440 SGL Offset: Not Supported 00:07:46.440 Transport SGL Data Block: Not Supported 00:07:46.440 Replay Protected Memory Block: Not Supported 00:07:46.440 00:07:46.440 Firmware Slot Information 00:07:46.440 ========================= 00:07:46.440 Active slot: 1 00:07:46.440 Slot 1 Firmware Revision: 1.0 00:07:46.440 00:07:46.440 00:07:46.440 Commands Supported and Effects 00:07:46.440 ============================== 00:07:46.440 Admin Commands 00:07:46.440 -------------- 00:07:46.440 Delete I/O Submission Queue (00h): Supported 00:07:46.440 Create I/O Submission Queue (01h): Supported 00:07:46.440 Get Log Page (02h): Supported 00:07:46.440 Delete I/O Completion Queue (04h): Supported 00:07:46.440 Create I/O Completion Queue (05h): Supported 00:07:46.440 Identify (06h): Supported 00:07:46.440 Abort (08h): Supported 00:07:46.440 Set Features (09h): Supported 00:07:46.440 Get Features (0Ah): Supported 00:07:46.440 Asynchronous Event Request (0Ch): Supported 00:07:46.440 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:46.440 Directive Send (19h): Supported 00:07:46.440 Directive Receive (1Ah): Supported 00:07:46.440 Virtualization Management (1Ch): Supported 00:07:46.440 Doorbell Buffer Config (7Ch): Supported 00:07:46.440 Format NVM (80h): Supported LBA-Change 00:07:46.440 I/O Commands 00:07:46.440 ------------ 00:07:46.440 Flush (00h): Supported LBA-Change 00:07:46.440 Write (01h): Supported LBA-Change 00:07:46.440 Read (02h): Supported 00:07:46.440 Compare (05h): Supported 
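Each controller dump reports a Subsystem NQN of the form nqn.2019-08.org.qemu:<serial> (12340, 12341, 12342 in this run), i.e. the QEMU-generated NQN embeds the controller serial number. A one-liner reconstructing the NQN reported just above, assuming that pattern holds:

  serial=12341
  echo "nqn.2019-08.org.qemu:${serial}"  # matches the Subsystem NQN in the identify output above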
00:07:46.440 Write Zeroes (08h): Supported LBA-Change 00:07:46.440 Dataset Management (09h): Supported LBA-Change 00:07:46.440 Unknown (0Ch): Supported 00:07:46.440 Unknown (12h): Supported 00:07:46.440 Copy (19h): Supported LBA-Change 00:07:46.440 Unknown (1Dh): Supported LBA-Change 00:07:46.440 00:07:46.440 Error Log 00:07:46.440 ========= 00:07:46.440 00:07:46.440 Arbitration 00:07:46.440 =========== 00:07:46.440 Arbitration Burst: no limit 00:07:46.440 00:07:46.440 Power Management 00:07:46.440 ================ 00:07:46.440 Number of Power States: 1 00:07:46.440 Current Power State: Power State #0 00:07:46.440 Power State #0: 00:07:46.440 Max Power: 25.00 W 00:07:46.440 Non-Operational State: Operational 00:07:46.440 Entry Latency: 16 microseconds 00:07:46.440 Exit Latency: 4 microseconds 00:07:46.440 Relative Read Throughput: 0 00:07:46.440 Relative Read Latency: 0 00:07:46.440 Relative Write Throughput: 0 00:07:46.440 Relative Write Latency: 0 00:07:46.440 Idle Power: Not Reported 00:07:46.440 Active Power: Not Reported 00:07:46.440 Non-Operational Permissive Mode: Not Supported 00:07:46.440 00:07:46.440 Health Information 00:07:46.440 ================== 00:07:46.440 Critical Warnings: 00:07:46.440 Available Spare Space: OK 00:07:46.440 Temperature: OK 00:07:46.440 Device Reliability: OK 00:07:46.440 Read Only: No 00:07:46.440 Volatile Memory Backup: OK 00:07:46.440 Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.440 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:46.440 Available Spare: 0% 00:07:46.440 Available Spare Threshold: 0% 00:07:46.440 Life Percentage Used: 0% 00:07:46.440 Data Units Read: 1001 00:07:46.440 Data Units Written: 868 00:07:46.440 Host Read Commands: 54148 00:07:46.440 Host Write Commands: 52931 00:07:46.440 Controller Busy Time: 0 minutes 00:07:46.440 Power Cycles: 0 00:07:46.440 Power On Hours: 0 hours 00:07:46.440 Unsafe Shutdowns: 0 00:07:46.440 Unrecoverable Media Errors: 0 00:07:46.440 Lifetime Error Log Entries: 0 00:07:46.440 Warning Temperature Time: 0 minutes 00:07:46.440 Critical Temperature Time: 0 minutes 00:07:46.440 00:07:46.440 Number of Queues 00:07:46.440 ================ 00:07:46.440 Number of I/O Submission Queues: 64 00:07:46.440 Number of I/O Completion Queues: 64 00:07:46.440 00:07:46.440 ZNS Specific Controller Data 00:07:46.440 ============================ 00:07:46.440 Zone Append Size Limit: 0 00:07:46.440 00:07:46.440 00:07:46.440 Active Namespaces 00:07:46.440 ================= 00:07:46.440 Namespace ID:1 00:07:46.440 Error Recovery Timeout: Unlimited 00:07:46.440 Command Set Identifier: NVM (00h) 00:07:46.440 Deallocate: Supported 00:07:46.440 Deallocated/Unwritten Error: Supported 00:07:46.440 Deallocated Read Value: All 0x00 00:07:46.440 Deallocate in Write Zeroes: Not Supported 00:07:46.440 Deallocated Guard Field: 0xFFFF 00:07:46.440 Flush: Supported 00:07:46.440 Reservation: Not Supported 00:07:46.440 Namespace Sharing Capabilities: Private 00:07:46.440 Size (in LBAs): 1310720 (5GiB) 00:07:46.440 Capacity (in LBAs): 1310720 (5GiB) 00:07:46.440 Utilization (in LBAs): 1310720 (5GiB) 00:07:46.440 Thin Provisioning: Not Supported 00:07:46.441 Per-NS Atomic Units: No 00:07:46.441 Maximum Single Source Range Length: 128 00:07:46.441 Maximum Copy Length: 128 00:07:46.441 Maximum Source Range Count: 128 00:07:46.441 NGUID/EUI64 Never Reused: No 00:07:46.441 Namespace Write Protected: No 00:07:46.441 Number of LBA Formats: 8 00:07:46.441 Current LBA Format: LBA Format #04 00:07:46.441 LBA Format #00: Data Size: 512 Metadata 
Size: 0 00:07:46.441 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.441 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.441 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.441 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.441 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.441 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.441 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.441 00:07:46.441 NVM Specific Namespace Data 00:07:46.441 =========================== 00:07:46.441 Logical Block Storage Tag Mask: 0 00:07:46.441 Protection Information Capabilities: 00:07:46.441 16b Guard Protection Information Storage Tag Support: No 00:07:46.441 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.441 Storage Tag Check Read Support: No 00:07:46.441 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.441 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.441 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.441 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.441 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.441 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.441 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.441 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.441 18:21:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:46.441 18:21:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:46.700 ===================================================== 00:07:46.701 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:46.701 ===================================================== 00:07:46.701 Controller Capabilities/Features 00:07:46.701 ================================ 00:07:46.701 Vendor ID: 1b36 00:07:46.701 Subsystem Vendor ID: 1af4 00:07:46.701 Serial Number: 12342 00:07:46.701 Model Number: QEMU NVMe Ctrl 00:07:46.701 Firmware Version: 8.0.0 00:07:46.701 Recommended Arb Burst: 6 00:07:46.701 IEEE OUI Identifier: 00 54 52 00:07:46.701 Multi-path I/O 00:07:46.701 May have multiple subsystem ports: No 00:07:46.701 May have multiple controllers: No 00:07:46.701 Associated with SR-IOV VF: No 00:07:46.701 Max Data Transfer Size: 524288 00:07:46.701 Max Number of Namespaces: 256 00:07:46.701 Max Number of I/O Queues: 64 00:07:46.701 NVMe Specification Version (VS): 1.4 00:07:46.701 NVMe Specification Version (Identify): 1.4 00:07:46.701 Maximum Queue Entries: 2048 00:07:46.701 Contiguous Queues Required: Yes 00:07:46.701 Arbitration Mechanisms Supported 00:07:46.701 Weighted Round Robin: Not Supported 00:07:46.701 Vendor Specific: Not Supported 00:07:46.701 Reset Timeout: 7500 ms 00:07:46.701 Doorbell Stride: 4 bytes 00:07:46.701 NVM Subsystem Reset: Not Supported 00:07:46.701 Command Sets Supported 00:07:46.701 NVM Command Set: Supported 00:07:46.701 Boot Partition: Not Supported 00:07:46.701 Memory Page Size Minimum: 4096 bytes 00:07:46.701 Memory Page Size Maximum: 65536 bytes 00:07:46.701 Persistent Memory Region: Not Supported 00:07:46.701 Optional Asynchronous Events Supported 
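The namespace on controller 12341 above reports 1310720 LBAs at the current 4096-byte format (#04), so the "(5GiB)" annotation can be checked directly; a quick bash verification (bash's ** operator is integer exponentiation):

  lbas=1310720; block=4096
  echo "$((lbas * block)) bytes"             # 5368709120 bytes
  echo "$((lbas * block / 1024**3)) GiB"     # 5 GiB, matching the identify output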
00:07:46.701 Namespace Attribute Notices: Supported 00:07:46.701 Firmware Activation Notices: Not Supported 00:07:46.701 ANA Change Notices: Not Supported 00:07:46.701 PLE Aggregate Log Change Notices: Not Supported 00:07:46.701 LBA Status Info Alert Notices: Not Supported 00:07:46.701 EGE Aggregate Log Change Notices: Not Supported 00:07:46.701 Normal NVM Subsystem Shutdown event: Not Supported 00:07:46.701 Zone Descriptor Change Notices: Not Supported 00:07:46.701 Discovery Log Change Notices: Not Supported 00:07:46.701 Controller Attributes 00:07:46.701 128-bit Host Identifier: Not Supported 00:07:46.701 Non-Operational Permissive Mode: Not Supported 00:07:46.701 NVM Sets: Not Supported 00:07:46.701 Read Recovery Levels: Not Supported 00:07:46.701 Endurance Groups: Not Supported 00:07:46.701 Predictable Latency Mode: Not Supported 00:07:46.701 Traffic Based Keep ALive: Not Supported 00:07:46.701 Namespace Granularity: Not Supported 00:07:46.701 SQ Associations: Not Supported 00:07:46.701 UUID List: Not Supported 00:07:46.701 Multi-Domain Subsystem: Not Supported 00:07:46.701 Fixed Capacity Management: Not Supported 00:07:46.701 Variable Capacity Management: Not Supported 00:07:46.701 Delete Endurance Group: Not Supported 00:07:46.701 Delete NVM Set: Not Supported 00:07:46.701 Extended LBA Formats Supported: Supported 00:07:46.701 Flexible Data Placement Supported: Not Supported 00:07:46.701 00:07:46.701 Controller Memory Buffer Support 00:07:46.701 ================================ 00:07:46.701 Supported: No 00:07:46.701 00:07:46.701 Persistent Memory Region Support 00:07:46.701 ================================ 00:07:46.701 Supported: No 00:07:46.701 00:07:46.701 Admin Command Set Attributes 00:07:46.701 ============================ 00:07:46.701 Security Send/Receive: Not Supported 00:07:46.701 Format NVM: Supported 00:07:46.701 Firmware Activate/Download: Not Supported 00:07:46.701 Namespace Management: Supported 00:07:46.701 Device Self-Test: Not Supported 00:07:46.701 Directives: Supported 00:07:46.701 NVMe-MI: Not Supported 00:07:46.701 Virtualization Management: Not Supported 00:07:46.701 Doorbell Buffer Config: Supported 00:07:46.701 Get LBA Status Capability: Not Supported 00:07:46.701 Command & Feature Lockdown Capability: Not Supported 00:07:46.701 Abort Command Limit: 4 00:07:46.701 Async Event Request Limit: 4 00:07:46.701 Number of Firmware Slots: N/A 00:07:46.701 Firmware Slot 1 Read-Only: N/A 00:07:46.701 Firmware Activation Without Reset: N/A 00:07:46.701 Multiple Update Detection Support: N/A 00:07:46.701 Firmware Update Granularity: No Information Provided 00:07:46.701 Per-Namespace SMART Log: Yes 00:07:46.701 Asymmetric Namespace Access Log Page: Not Supported 00:07:46.701 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:46.701 Command Effects Log Page: Supported 00:07:46.701 Get Log Page Extended Data: Supported 00:07:46.701 Telemetry Log Pages: Not Supported 00:07:46.701 Persistent Event Log Pages: Not Supported 00:07:46.701 Supported Log Pages Log Page: May Support 00:07:46.701 Commands Supported & Effects Log Page: Not Supported 00:07:46.701 Feature Identifiers & Effects Log Page:May Support 00:07:46.701 NVMe-MI Commands & Effects Log Page: May Support 00:07:46.701 Data Area 4 for Telemetry Log: Not Supported 00:07:46.701 Error Log Page Entries Supported: 1 00:07:46.701 Keep Alive: Not Supported 00:07:46.701 00:07:46.701 NVM Command Set Attributes 00:07:46.701 ========================== 00:07:46.701 Submission Queue Entry Size 00:07:46.701 Max: 64 00:07:46.701 Min: 
64 00:07:46.701 Completion Queue Entry Size 00:07:46.701 Max: 16 00:07:46.701 Min: 16 00:07:46.701 Number of Namespaces: 256 00:07:46.701 Compare Command: Supported 00:07:46.701 Write Uncorrectable Command: Not Supported 00:07:46.701 Dataset Management Command: Supported 00:07:46.701 Write Zeroes Command: Supported 00:07:46.701 Set Features Save Field: Supported 00:07:46.701 Reservations: Not Supported 00:07:46.701 Timestamp: Supported 00:07:46.701 Copy: Supported 00:07:46.701 Volatile Write Cache: Present 00:07:46.701 Atomic Write Unit (Normal): 1 00:07:46.701 Atomic Write Unit (PFail): 1 00:07:46.701 Atomic Compare & Write Unit: 1 00:07:46.701 Fused Compare & Write: Not Supported 00:07:46.701 Scatter-Gather List 00:07:46.701 SGL Command Set: Supported 00:07:46.701 SGL Keyed: Not Supported 00:07:46.701 SGL Bit Bucket Descriptor: Not Supported 00:07:46.701 SGL Metadata Pointer: Not Supported 00:07:46.701 Oversized SGL: Not Supported 00:07:46.701 SGL Metadata Address: Not Supported 00:07:46.701 SGL Offset: Not Supported 00:07:46.701 Transport SGL Data Block: Not Supported 00:07:46.701 Replay Protected Memory Block: Not Supported 00:07:46.701 00:07:46.701 Firmware Slot Information 00:07:46.701 ========================= 00:07:46.701 Active slot: 1 00:07:46.701 Slot 1 Firmware Revision: 1.0 00:07:46.701 00:07:46.701 00:07:46.701 Commands Supported and Effects 00:07:46.701 ============================== 00:07:46.701 Admin Commands 00:07:46.701 -------------- 00:07:46.701 Delete I/O Submission Queue (00h): Supported 00:07:46.701 Create I/O Submission Queue (01h): Supported 00:07:46.701 Get Log Page (02h): Supported 00:07:46.701 Delete I/O Completion Queue (04h): Supported 00:07:46.701 Create I/O Completion Queue (05h): Supported 00:07:46.701 Identify (06h): Supported 00:07:46.701 Abort (08h): Supported 00:07:46.701 Set Features (09h): Supported 00:07:46.701 Get Features (0Ah): Supported 00:07:46.701 Asynchronous Event Request (0Ch): Supported 00:07:46.701 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:46.701 Directive Send (19h): Supported 00:07:46.701 Directive Receive (1Ah): Supported 00:07:46.701 Virtualization Management (1Ch): Supported 00:07:46.701 Doorbell Buffer Config (7Ch): Supported 00:07:46.701 Format NVM (80h): Supported LBA-Change 00:07:46.701 I/O Commands 00:07:46.701 ------------ 00:07:46.701 Flush (00h): Supported LBA-Change 00:07:46.701 Write (01h): Supported LBA-Change 00:07:46.701 Read (02h): Supported 00:07:46.701 Compare (05h): Supported 00:07:46.701 Write Zeroes (08h): Supported LBA-Change 00:07:46.701 Dataset Management (09h): Supported LBA-Change 00:07:46.701 Unknown (0Ch): Supported 00:07:46.701 Unknown (12h): Supported 00:07:46.701 Copy (19h): Supported LBA-Change 00:07:46.702 Unknown (1Dh): Supported LBA-Change 00:07:46.702 00:07:46.702 Error Log 00:07:46.702 ========= 00:07:46.702 00:07:46.702 Arbitration 00:07:46.702 =========== 00:07:46.702 Arbitration Burst: no limit 00:07:46.702 00:07:46.702 Power Management 00:07:46.702 ================ 00:07:46.702 Number of Power States: 1 00:07:46.702 Current Power State: Power State #0 00:07:46.702 Power State #0: 00:07:46.702 Max Power: 25.00 W 00:07:46.702 Non-Operational State: Operational 00:07:46.702 Entry Latency: 16 microseconds 00:07:46.702 Exit Latency: 4 microseconds 00:07:46.702 Relative Read Throughput: 0 00:07:46.702 Relative Read Latency: 0 00:07:46.702 Relative Write Throughput: 0 00:07:46.702 Relative Write Latency: 0 00:07:46.702 Idle Power: Not Reported 00:07:46.702 Active Power: Not 
Reported 00:07:46.702 Non-Operational Permissive Mode: Not Supported 00:07:46.702 00:07:46.702 Health Information 00:07:46.702 ================== 00:07:46.702 Critical Warnings: 00:07:46.702 Available Spare Space: OK 00:07:46.702 Temperature: OK 00:07:46.702 Device Reliability: OK 00:07:46.702 Read Only: No 00:07:46.702 Volatile Memory Backup: OK 00:07:46.702 Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.702 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:46.702 Available Spare: 0% 00:07:46.702 Available Spare Threshold: 0% 00:07:46.702 Life Percentage Used: 0% 00:07:46.702 Data Units Read: 2086 00:07:46.702 Data Units Written: 1873 00:07:46.702 Host Read Commands: 108938 00:07:46.702 Host Write Commands: 107207 00:07:46.702 Controller Busy Time: 0 minutes 00:07:46.702 Power Cycles: 0 00:07:46.702 Power On Hours: 0 hours 00:07:46.702 Unsafe Shutdowns: 0 00:07:46.702 Unrecoverable Media Errors: 0 00:07:46.702 Lifetime Error Log Entries: 0 00:07:46.702 Warning Temperature Time: 0 minutes 00:07:46.702 Critical Temperature Time: 0 minutes 00:07:46.702 00:07:46.702 Number of Queues 00:07:46.702 ================ 00:07:46.702 Number of I/O Submission Queues: 64 00:07:46.702 Number of I/O Completion Queues: 64 00:07:46.702 00:07:46.702 ZNS Specific Controller Data 00:07:46.702 ============================ 00:07:46.702 Zone Append Size Limit: 0 00:07:46.702 00:07:46.702 00:07:46.702 Active Namespaces 00:07:46.702 ================= 00:07:46.702 Namespace ID:1 00:07:46.702 Error Recovery Timeout: Unlimited 00:07:46.702 Command Set Identifier: NVM (00h) 00:07:46.702 Deallocate: Supported 00:07:46.702 Deallocated/Unwritten Error: Supported 00:07:46.702 Deallocated Read Value: All 0x00 00:07:46.702 Deallocate in Write Zeroes: Not Supported 00:07:46.702 Deallocated Guard Field: 0xFFFF 00:07:46.702 Flush: Supported 00:07:46.702 Reservation: Not Supported 00:07:46.702 Namespace Sharing Capabilities: Private 00:07:46.702 Size (in LBAs): 1048576 (4GiB) 00:07:46.702 Capacity (in LBAs): 1048576 (4GiB) 00:07:46.702 Utilization (in LBAs): 1048576 (4GiB) 00:07:46.702 Thin Provisioning: Not Supported 00:07:46.702 Per-NS Atomic Units: No 00:07:46.702 Maximum Single Source Range Length: 128 00:07:46.702 Maximum Copy Length: 128 00:07:46.702 Maximum Source Range Count: 128 00:07:46.702 NGUID/EUI64 Never Reused: No 00:07:46.702 Namespace Write Protected: No 00:07:46.702 Number of LBA Formats: 8 00:07:46.702 Current LBA Format: LBA Format #04 00:07:46.702 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.702 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.702 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.702 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.702 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.702 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.702 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.702 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.702 00:07:46.702 NVM Specific Namespace Data 00:07:46.702 =========================== 00:07:46.702 Logical Block Storage Tag Mask: 0 00:07:46.702 Protection Information Capabilities: 00:07:46.702 16b Guard Protection Information Storage Tag Support: No 00:07:46.702 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.702 Storage Tag Check Read Support: No 00:07:46.702 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.702 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information 
Format: 16b Guard PI 00:07:46.702 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.702 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.702 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.702 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.702 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.702 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.702 Namespace ID:2 00:07:46.702 Error Recovery Timeout: Unlimited 00:07:46.702 Command Set Identifier: NVM (00h) 00:07:46.702 Deallocate: Supported 00:07:46.702 Deallocated/Unwritten Error: Supported 00:07:46.702 Deallocated Read Value: All 0x00 00:07:46.702 Deallocate in Write Zeroes: Not Supported 00:07:46.702 Deallocated Guard Field: 0xFFFF 00:07:46.702 Flush: Supported 00:07:46.702 Reservation: Not Supported 00:07:46.702 Namespace Sharing Capabilities: Private 00:07:46.702 Size (in LBAs): 1048576 (4GiB) 00:07:46.702 Capacity (in LBAs): 1048576 (4GiB) 00:07:46.702 Utilization (in LBAs): 1048576 (4GiB) 00:07:46.702 Thin Provisioning: Not Supported 00:07:46.702 Per-NS Atomic Units: No 00:07:46.702 Maximum Single Source Range Length: 128 00:07:46.702 Maximum Copy Length: 128 00:07:46.702 Maximum Source Range Count: 128 00:07:46.702 NGUID/EUI64 Never Reused: No 00:07:46.702 Namespace Write Protected: No 00:07:46.702 Number of LBA Formats: 8 00:07:46.702 Current LBA Format: LBA Format #04 00:07:46.702 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.702 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.702 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.702 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.702 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.702 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.702 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.702 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.702 00:07:46.702 NVM Specific Namespace Data 00:07:46.702 =========================== 00:07:46.702 Logical Block Storage Tag Mask: 0 00:07:46.702 Protection Information Capabilities: 00:07:46.702 16b Guard Protection Information Storage Tag Support: No 00:07:46.702 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.702 Storage Tag Check Read Support: No 00:07:46.702 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.702 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.702 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.702 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.702 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.702 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.702 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.702 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.702 Namespace ID:3 00:07:46.702 Error Recovery Timeout: Unlimited 00:07:46.702 Command Set Identifier: NVM (00h) 00:07:46.702 Deallocate: Supported 00:07:46.702 
Deallocated/Unwritten Error: Supported 00:07:46.702 Deallocated Read Value: All 0x00 00:07:46.702 Deallocate in Write Zeroes: Not Supported 00:07:46.702 Deallocated Guard Field: 0xFFFF 00:07:46.702 Flush: Supported 00:07:46.702 Reservation: Not Supported 00:07:46.702 Namespace Sharing Capabilities: Private 00:07:46.702 Size (in LBAs): 1048576 (4GiB) 00:07:46.702 Capacity (in LBAs): 1048576 (4GiB) 00:07:46.702 Utilization (in LBAs): 1048576 (4GiB) 00:07:46.702 Thin Provisioning: Not Supported 00:07:46.702 Per-NS Atomic Units: No 00:07:46.702 Maximum Single Source Range Length: 128 00:07:46.702 Maximum Copy Length: 128 00:07:46.702 Maximum Source Range Count: 128 00:07:46.702 NGUID/EUI64 Never Reused: No 00:07:46.702 Namespace Write Protected: No 00:07:46.702 Number of LBA Formats: 8 00:07:46.702 Current LBA Format: LBA Format #04 00:07:46.702 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.702 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.702 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.702 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.702 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.702 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.702 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.702 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.702 00:07:46.702 NVM Specific Namespace Data 00:07:46.702 =========================== 00:07:46.703 Logical Block Storage Tag Mask: 0 00:07:46.703 Protection Information Capabilities: 00:07:46.703 16b Guard Protection Information Storage Tag Support: No 00:07:46.703 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.703 Storage Tag Check Read Support: No 00:07:46.703 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.703 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.703 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.703 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.703 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.703 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.703 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.703 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.703 18:21:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:46.703 18:21:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:46.703 ===================================================== 00:07:46.703 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:46.703 ===================================================== 00:07:46.703 Controller Capabilities/Features 00:07:46.703 ================================ 00:07:46.703 Vendor ID: 1b36 00:07:46.703 Subsystem Vendor ID: 1af4 00:07:46.703 Serial Number: 12343 00:07:46.703 Model Number: QEMU NVMe Ctrl 00:07:46.703 Firmware Version: 8.0.0 00:07:46.703 Recommended Arb Burst: 6 00:07:46.703 IEEE OUI Identifier: 00 54 52 00:07:46.703 Multi-path I/O 00:07:46.703 May have multiple subsystem ports: No 00:07:46.703 May have multiple controllers: Yes 00:07:46.703 Associated with SR-IOV VF: No 00:07:46.703 Max 
Data Transfer Size: 524288 00:07:46.703 Max Number of Namespaces: 256 00:07:46.703 Max Number of I/O Queues: 64 00:07:46.703 NVMe Specification Version (VS): 1.4 00:07:46.703 NVMe Specification Version (Identify): 1.4 00:07:46.703 Maximum Queue Entries: 2048 00:07:46.703 Contiguous Queues Required: Yes 00:07:46.703 Arbitration Mechanisms Supported 00:07:46.703 Weighted Round Robin: Not Supported 00:07:46.703 Vendor Specific: Not Supported 00:07:46.703 Reset Timeout: 7500 ms 00:07:46.703 Doorbell Stride: 4 bytes 00:07:46.703 NVM Subsystem Reset: Not Supported 00:07:46.703 Command Sets Supported 00:07:46.703 NVM Command Set: Supported 00:07:46.703 Boot Partition: Not Supported 00:07:46.703 Memory Page Size Minimum: 4096 bytes 00:07:46.703 Memory Page Size Maximum: 65536 bytes 00:07:46.703 Persistent Memory Region: Not Supported 00:07:46.703 Optional Asynchronous Events Supported 00:07:46.703 Namespace Attribute Notices: Supported 00:07:46.703 Firmware Activation Notices: Not Supported 00:07:46.703 ANA Change Notices: Not Supported 00:07:46.703 PLE Aggregate Log Change Notices: Not Supported 00:07:46.703 LBA Status Info Alert Notices: Not Supported 00:07:46.703 EGE Aggregate Log Change Notices: Not Supported 00:07:46.703 Normal NVM Subsystem Shutdown event: Not Supported 00:07:46.703 Zone Descriptor Change Notices: Not Supported 00:07:46.703 Discovery Log Change Notices: Not Supported 00:07:46.703 Controller Attributes 00:07:46.703 128-bit Host Identifier: Not Supported 00:07:46.703 Non-Operational Permissive Mode: Not Supported 00:07:46.703 NVM Sets: Not Supported 00:07:46.703 Read Recovery Levels: Not Supported 00:07:46.703 Endurance Groups: Supported 00:07:46.703 Predictable Latency Mode: Not Supported 00:07:46.703 Traffic Based Keep Alive: Not Supported 00:07:46.703 Namespace Granularity: Not Supported 00:07:46.703 SQ Associations: Not Supported 00:07:46.703 UUID List: Not Supported 00:07:46.703 Multi-Domain Subsystem: Not Supported 00:07:46.703 Fixed Capacity Management: Not Supported 00:07:46.703 Variable Capacity Management: Not Supported 00:07:46.703 Delete Endurance Group: Not Supported 00:07:46.703 Delete NVM Set: Not Supported 00:07:46.703 Extended LBA Formats Supported: Supported 00:07:46.703 Flexible Data Placement Supported: Supported 00:07:46.703 00:07:46.703 Controller Memory Buffer Support 00:07:46.703 ================================ 00:07:46.703 Supported: No 00:07:46.703 00:07:46.703 Persistent Memory Region Support 00:07:46.703 ================================ 00:07:46.703 Supported: No 00:07:46.703 00:07:46.703 Admin Command Set Attributes 00:07:46.703 ============================ 00:07:46.703 Security Send/Receive: Not Supported 00:07:46.703 Format NVM: Supported 00:07:46.703 Firmware Activate/Download: Not Supported 00:07:46.703 Namespace Management: Supported 00:07:46.703 Device Self-Test: Not Supported 00:07:46.703 Directives: Supported 00:07:46.703 NVMe-MI: Not Supported 00:07:46.703 Virtualization Management: Not Supported 00:07:46.703 Doorbell Buffer Config: Supported 00:07:46.703 Get LBA Status Capability: Not Supported 00:07:46.703 Command & Feature Lockdown Capability: Not Supported 00:07:46.703 Abort Command Limit: 4 00:07:46.703 Async Event Request Limit: 4 00:07:46.703 Number of Firmware Slots: N/A 00:07:46.703 Firmware Slot 1 Read-Only: N/A 00:07:46.703 Firmware Activation Without Reset: N/A 00:07:46.703 Multiple Update Detection Support: N/A 00:07:46.703 Firmware Update Granularity: No Information Provided 00:07:46.703 Per-Namespace SMART Log: Yes
00:07:46.703 Asymmetric Namespace Access Log Page: Not Supported 00:07:46.703 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:46.703 Command Effects Log Page: Supported 00:07:46.703 Get Log Page Extended Data: Supported 00:07:46.703 Telemetry Log Pages: Not Supported 00:07:46.703 Persistent Event Log Pages: Not Supported 00:07:46.703 Supported Log Pages Log Page: May Support 00:07:46.703 Commands Supported & Effects Log Page: Not Supported 00:07:46.703 Feature Identifiers & Effects Log Page: May Support 00:07:46.703 NVMe-MI Commands & Effects Log Page: May Support 00:07:46.703 Data Area 4 for Telemetry Log: Not Supported 00:07:46.703 Error Log Page Entries Supported: 1 00:07:46.703 Keep Alive: Not Supported 00:07:46.703 00:07:46.703 NVM Command Set Attributes 00:07:46.703 ========================== 00:07:46.703 Submission Queue Entry Size 00:07:46.703 Max: 64 00:07:46.703 Min: 64 00:07:46.703 Completion Queue Entry Size 00:07:46.703 Max: 16 00:07:46.703 Min: 16 00:07:46.703 Number of Namespaces: 256 00:07:46.703 Compare Command: Supported 00:07:46.703 Write Uncorrectable Command: Not Supported 00:07:46.703 Dataset Management Command: Supported 00:07:46.703 Write Zeroes Command: Supported 00:07:46.703 Set Features Save Field: Supported 00:07:46.703 Reservations: Not Supported 00:07:46.703 Timestamp: Supported 00:07:46.703 Copy: Supported 00:07:46.703 Volatile Write Cache: Present 00:07:46.703 Atomic Write Unit (Normal): 1 00:07:46.703 Atomic Write Unit (PFail): 1 00:07:46.703 Atomic Compare & Write Unit: 1 00:07:46.703 Fused Compare & Write: Not Supported 00:07:46.703 Scatter-Gather List 00:07:46.703 SGL Command Set: Supported 00:07:46.703 SGL Keyed: Not Supported 00:07:46.703 SGL Bit Bucket Descriptor: Not Supported 00:07:46.703 SGL Metadata Pointer: Not Supported 00:07:46.703 Oversized SGL: Not Supported 00:07:46.703 SGL Metadata Address: Not Supported 00:07:46.703 SGL Offset: Not Supported 00:07:46.703 Transport SGL Data Block: Not Supported 00:07:46.703 Replay Protected Memory Block: Not Supported 00:07:46.703 00:07:46.703 Firmware Slot Information 00:07:46.703 ========================= 00:07:46.703 Active slot: 1 00:07:46.703 Slot 1 Firmware Revision: 1.0 00:07:46.703 00:07:46.703 00:07:46.703 Commands Supported and Effects 00:07:46.703 ============================== 00:07:46.703 Admin Commands 00:07:46.703 -------------- 00:07:46.703 Delete I/O Submission Queue (00h): Supported 00:07:46.703 Create I/O Submission Queue (01h): Supported 00:07:46.703 Get Log Page (02h): Supported 00:07:46.703 Delete I/O Completion Queue (04h): Supported 00:07:46.703 Create I/O Completion Queue (05h): Supported 00:07:46.703 Identify (06h): Supported 00:07:46.703 Abort (08h): Supported 00:07:46.703 Set Features (09h): Supported 00:07:46.703 Get Features (0Ah): Supported 00:07:46.703 Asynchronous Event Request (0Ch): Supported 00:07:46.703 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:46.703 Directive Send (19h): Supported 00:07:46.703 Directive Receive (1Ah): Supported 00:07:46.703 Virtualization Management (1Ch): Supported 00:07:46.703 Doorbell Buffer Config (7Ch): Supported 00:07:46.703 Format NVM (80h): Supported LBA-Change 00:07:46.703 I/O Commands 00:07:46.703 ------------ 00:07:46.703 Flush (00h): Supported LBA-Change 00:07:46.703 Write (01h): Supported LBA-Change 00:07:46.703 Read (02h): Supported 00:07:46.703 Compare (05h): Supported 00:07:46.703 Write Zeroes (08h): Supported LBA-Change 00:07:46.703 Dataset Management (09h): Supported LBA-Change 00:07:46.703 Unknown (0Ch):
Supported 00:07:46.703 Unknown (12h): Supported 00:07:46.703 Copy (19h): Supported LBA-Change 00:07:46.703 Unknown (1Dh): Supported LBA-Change 00:07:46.704 00:07:46.704 Error Log 00:07:46.704 ========= 00:07:46.704 00:07:46.704 Arbitration 00:07:46.704 =========== 00:07:46.704 Arbitration Burst: no limit 00:07:46.704 00:07:46.704 Power Management 00:07:46.704 ================ 00:07:46.704 Number of Power States: 1 00:07:46.704 Current Power State: Power State #0 00:07:46.704 Power State #0: 00:07:46.704 Max Power: 25.00 W 00:07:46.704 Non-Operational State: Operational 00:07:46.704 Entry Latency: 16 microseconds 00:07:46.704 Exit Latency: 4 microseconds 00:07:46.704 Relative Read Throughput: 0 00:07:46.704 Relative Read Latency: 0 00:07:46.704 Relative Write Throughput: 0 00:07:46.704 Relative Write Latency: 0 00:07:46.704 Idle Power: Not Reported 00:07:46.704 Active Power: Not Reported 00:07:46.704 Non-Operational Permissive Mode: Not Supported 00:07:46.704 00:07:46.704 Health Information 00:07:46.704 ================== 00:07:46.704 Critical Warnings: 00:07:46.704 Available Spare Space: OK 00:07:46.704 Temperature: OK 00:07:46.704 Device Reliability: OK 00:07:46.704 Read Only: No 00:07:46.704 Volatile Memory Backup: OK 00:07:46.704 Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.704 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:46.704 Available Spare: 0% 00:07:46.704 Available Spare Threshold: 0% 00:07:46.704 Life Percentage Used: 0% 00:07:46.704 Data Units Read: 813 00:07:46.704 Data Units Written: 742 00:07:46.704 Host Read Commands: 37332 00:07:46.704 Host Write Commands: 36757 00:07:46.704 Controller Busy Time: 0 minutes 00:07:46.704 Power Cycles: 0 00:07:46.704 Power On Hours: 0 hours 00:07:46.704 Unsafe Shutdowns: 0 00:07:46.704 Unrecoverable Media Errors: 0 00:07:46.704 Lifetime Error Log Entries: 0 00:07:46.704 Warning Temperature Time: 0 minutes 00:07:46.704 Critical Temperature Time: 0 minutes 00:07:46.704 00:07:46.704 Number of Queues 00:07:46.704 ================ 00:07:46.704 Number of I/O Submission Queues: 64 00:07:46.704 Number of I/O Completion Queues: 64 00:07:46.704 00:07:46.704 ZNS Specific Controller Data 00:07:46.704 ============================ 00:07:46.704 Zone Append Size Limit: 0 00:07:46.704 00:07:46.704 00:07:46.704 Active Namespaces 00:07:46.704 ================= 00:07:46.704 Namespace ID:1 00:07:46.704 Error Recovery Timeout: Unlimited 00:07:46.704 Command Set Identifier: NVM (00h) 00:07:46.704 Deallocate: Supported 00:07:46.704 Deallocated/Unwritten Error: Supported 00:07:46.704 Deallocated Read Value: All 0x00 00:07:46.704 Deallocate in Write Zeroes: Not Supported 00:07:46.704 Deallocated Guard Field: 0xFFFF 00:07:46.704 Flush: Supported 00:07:46.704 Reservation: Not Supported 00:07:46.704 Namespace Sharing Capabilities: Multiple Controllers 00:07:46.704 Size (in LBAs): 262144 (1GiB) 00:07:46.704 Capacity (in LBAs): 262144 (1GiB) 00:07:46.704 Utilization (in LBAs): 262144 (1GiB) 00:07:46.704 Thin Provisioning: Not Supported 00:07:46.704 Per-NS Atomic Units: No 00:07:46.704 Maximum Single Source Range Length: 128 00:07:46.704 Maximum Copy Length: 128 00:07:46.704 Maximum Source Range Count: 128 00:07:46.704 NGUID/EUI64 Never Reused: No 00:07:46.704 Namespace Write Protected: No 00:07:46.704 Endurance group ID: 1 00:07:46.704 Number of LBA Formats: 8 00:07:46.704 Current LBA Format: LBA Format #04 00:07:46.704 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.704 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.704 LBA Format #02: 
Data Size: 512 Metadata Size: 16 00:07:46.704 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.704 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.704 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.704 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.704 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.704 00:07:46.704 Get Feature FDP: 00:07:46.704 ================ 00:07:46.704 Enabled: Yes 00:07:46.704 FDP configuration index: 0 00:07:46.704 00:07:46.704 FDP configurations log page 00:07:46.704 =========================== 00:07:46.704 Number of FDP configurations: 1 00:07:46.704 Version: 0 00:07:46.704 Size: 112 00:07:46.704 FDP Configuration Descriptor: 0 00:07:46.704 Descriptor Size: 96 00:07:46.704 Reclaim Group Identifier format: 2 00:07:46.704 FDP Volatile Write Cache: Not Present 00:07:46.704 FDP Configuration: Valid 00:07:46.704 Vendor Specific Size: 0 00:07:46.704 Number of Reclaim Groups: 2 00:07:46.704 Number of Reclaim Unit Handles: 8 00:07:46.704 Max Placement Identifiers: 128 00:07:46.704 Number of Namespaces Supported: 256 00:07:46.704 Reclaim Unit Nominal Size: 6000000 bytes 00:07:46.704 Estimated Reclaim Unit Time Limit: Not Reported 00:07:46.704 RUH Desc #000: RUH Type: Initially Isolated 00:07:46.704 RUH Desc #001: RUH Type: Initially Isolated 00:07:46.704 RUH Desc #002: RUH Type: Initially Isolated 00:07:46.704 RUH Desc #003: RUH Type: Initially Isolated 00:07:46.704 RUH Desc #004: RUH Type: Initially Isolated 00:07:46.704 RUH Desc #005: RUH Type: Initially Isolated 00:07:46.704 RUH Desc #006: RUH Type: Initially Isolated 00:07:46.704 RUH Desc #007: RUH Type: Initially Isolated 00:07:46.704 00:07:46.704 FDP reclaim unit handle usage log page 00:07:46.963 ====================================== 00:07:46.963 Number of Reclaim Unit Handles: 8 00:07:46.963 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:46.963 RUH Usage Desc #001: RUH Attributes: Unused 00:07:46.963 RUH Usage Desc #002: RUH Attributes: Unused 00:07:46.963 RUH Usage Desc #003: RUH Attributes: Unused 00:07:46.963 RUH Usage Desc #004: RUH Attributes: Unused 00:07:46.963 RUH Usage Desc #005: RUH Attributes: Unused 00:07:46.963 RUH Usage Desc #006: RUH Attributes: Unused 00:07:46.963 RUH Usage Desc #007: RUH Attributes: Unused 00:07:46.963 00:07:46.963 FDP statistics log page 00:07:46.963 ======================= 00:07:46.963 Host bytes with metadata written: 474259456 00:07:46.963 Media bytes with metadata written: 474304512 00:07:46.963 Media bytes erased: 0 00:07:46.963 00:07:46.963 FDP events log page 00:07:46.963 =================== 00:07:46.963 Number of FDP events: 0 00:07:46.963 00:07:46.963 NVM Specific Namespace Data 00:07:46.963 =========================== 00:07:46.963 Logical Block Storage Tag Mask: 0 00:07:46.963 Protection Information Capabilities: 00:07:46.963 16b Guard Protection Information Storage Tag Support: No 00:07:46.963 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.963 Storage Tag Check Read Support: No 00:07:46.963 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.963 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.963 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.963 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.964 Extended LBA Format #04: Storage Tag Size: 0 , Protection
Information Format: 16b Guard PI 00:07:46.964 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.964 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.964 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.964 00:07:46.964 real 0m1.108s 00:07:46.964 user 0m0.417s 00:07:46.964 sys 0m0.483s 00:07:46.964 18:21:06 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.964 ************************************ 00:07:46.964 18:21:06 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:46.964 END TEST nvme_identify 00:07:46.964 ************************************ 00:07:46.964 18:21:06 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:46.964 18:21:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.964 18:21:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.964 18:21:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.964 ************************************ 00:07:46.964 START TEST nvme_perf 00:07:46.964 ************************************ 00:07:46.964 18:21:06 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:46.964 18:21:06 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:48.342 Initializing NVMe Controllers 00:07:48.342 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:48.342 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:48.342 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:48.342 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:48.342 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:48.342 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:48.342 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:48.342 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:48.342 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:48.342 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:48.342 Initialization complete. Launching workers. 
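For reference, the two SPDK tool invocations exercised in this test can be reproduced by hand. The sketch below is illustrative rather than a copy of the test scripts: the binary paths and the PCIe address (0000:00:13.0) are taken verbatim from the log lines above and would need adjusting for another checkout or another device.

  # Minimal reproduction sketch, assuming an SPDK build at the path used by this job.
  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin

  # Dump controller and namespace identify data for one PCIe-attached controller,
  # as nvme.sh does above (-r selects the transport type and address):
  "$SPDK_BIN"/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

  # Run the same read workload as the nvme_perf test: queue depth 128 (-q),
  # 12288-byte I/Os (-o), 1 second (-t), with latency tracking enabled (-LL);
  # the remaining flags (-i 0 -N) are carried over verbatim from the log.
  "$SPDK_BIN"/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N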
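The perf output that follows prints, per device: a summary row (IOPS, MiB/s, and average/min/max latency in microseconds), a percentile summary, and a cumulative latency histogram whose rows have the form "low - high: cumulative% ( I/Os in bucket )". The percentile summaries can be read directly off the histograms: the p-th percentile reported is the upper bound of the first bucket whose cumulative percentage reaches p. Below is a minimal awk sketch of that lookup, assuming a hypothetical file hist.txt holding one device's bucket rows with the leading timestamps stripped; it mirrors the relationship visible in the output, not the tool's internal code.

  awk -v p=99.0 '
    $2 == "-" {                       # bucket rows look like: 19660.800 - 19761.625: 99.0057% ( 5)
      hi = $3;  sub(/:$/, "", hi)     # bucket upper bound, microseconds
      pct = $4; sub(/%$/, "", pct)    # cumulative share of I/Os at or below it
      if (pct + 0 >= p) { printf "p%g <= %s us\n", p, hi; exit }
    }' hist.txt

Run against the PCIE (0000:00:11.0) NSID 1 histogram below, this prints "p99 <= 19761.625 us", matching the "99.00000% : 19761.625us" line in that device's percentile summary.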
00:07:48.342 ======================================================== 00:07:48.342 Latency(us) 00:07:48.342 Device Information : IOPS MiB/s Average min max 00:07:48.342 PCIE (0000:00:11.0) NSID 1 from core 0: 9125.22 106.94 14040.47 9905.49 28776.99 00:07:48.342 PCIE (0000:00:13.0) NSID 1 from core 0: 9125.22 106.94 14037.56 8616.85 28688.32 00:07:48.342 PCIE (0000:00:10.0) NSID 1 from core 0: 9125.22 106.94 14024.87 7442.34 28830.36 00:07:48.342 PCIE (0000:00:12.0) NSID 1 from core 0: 9125.22 106.94 14014.19 6529.79 28753.20 00:07:48.342 PCIE (0000:00:12.0) NSID 2 from core 0: 9125.22 106.94 14002.55 5172.26 29379.82 00:07:48.342 PCIE (0000:00:12.0) NSID 3 from core 0: 9125.22 106.94 13990.51 4347.43 29526.09 00:07:48.342 ======================================================== 00:07:48.342 Total : 54751.30 641.62 14018.36 4347.43 29526.09 00:07:48.342 00:07:48.342 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:48.342 ================================================================================= 00:07:48.342 1.00000% : 10536.172us 00:07:48.342 10.00000% : 11594.831us 00:07:48.342 25.00000% : 12703.902us 00:07:48.342 50.00000% : 14014.622us 00:07:48.342 75.00000% : 15325.342us 00:07:48.342 90.00000% : 16333.588us 00:07:48.342 95.00000% : 16837.711us 00:07:48.342 98.00000% : 17644.308us 00:07:48.342 99.00000% : 19761.625us 00:07:48.342 99.50000% : 27827.594us 00:07:48.342 99.90000% : 28634.191us 00:07:48.342 99.99000% : 28835.840us 00:07:48.342 99.99900% : 28835.840us 00:07:48.342 99.99990% : 28835.840us 00:07:48.342 99.99999% : 28835.840us 00:07:48.342 00:07:48.342 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:48.342 ================================================================================= 00:07:48.342 1.00000% : 10536.172us 00:07:48.342 10.00000% : 11695.655us 00:07:48.342 25.00000% : 12754.314us 00:07:48.342 50.00000% : 13913.797us 00:07:48.342 75.00000% : 15224.517us 00:07:48.342 90.00000% : 16434.412us 00:07:48.342 95.00000% : 16938.535us 00:07:48.342 98.00000% : 17644.308us 00:07:48.342 99.00000% : 20467.397us 00:07:48.342 99.50000% : 27827.594us 00:07:48.342 99.90000% : 28634.191us 00:07:48.342 99.99000% : 28835.840us 00:07:48.342 99.99900% : 28835.840us 00:07:48.342 99.99990% : 28835.840us 00:07:48.342 99.99999% : 28835.840us 00:07:48.342 00:07:48.342 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:48.342 ================================================================================= 00:07:48.342 1.00000% : 10586.585us 00:07:48.342 10.00000% : 11645.243us 00:07:48.342 25.00000% : 12653.489us 00:07:48.342 50.00000% : 14014.622us 00:07:48.342 75.00000% : 15224.517us 00:07:48.342 90.00000% : 16232.763us 00:07:48.342 95.00000% : 16837.711us 00:07:48.342 98.00000% : 18047.606us 00:07:48.342 99.00000% : 20467.397us 00:07:48.342 99.50000% : 28230.892us 00:07:48.342 99.90000% : 28835.840us 00:07:48.342 99.99000% : 28835.840us 00:07:48.342 99.99900% : 28835.840us 00:07:48.342 99.99990% : 28835.840us 00:07:48.342 99.99999% : 28835.840us 00:07:48.342 00:07:48.342 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:48.342 ================================================================================= 00:07:48.342 1.00000% : 10485.760us 00:07:48.342 10.00000% : 11695.655us 00:07:48.342 25.00000% : 12603.077us 00:07:48.342 50.00000% : 14014.622us 00:07:48.342 75.00000% : 15224.517us 00:07:48.342 90.00000% : 16232.763us 00:07:48.342 95.00000% : 16938.535us 00:07:48.342 98.00000% : 18450.905us 
00:07:48.342 99.00000% : 20467.397us 00:07:48.342 99.50000% : 28230.892us 00:07:48.342 99.90000% : 28634.191us 00:07:48.342 99.99000% : 28835.840us 00:07:48.342 99.99900% : 28835.840us 00:07:48.342 99.99990% : 28835.840us 00:07:48.342 99.99999% : 28835.840us 00:07:48.342 00:07:48.342 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:48.342 ================================================================================= 00:07:48.342 1.00000% : 10284.111us 00:07:48.342 10.00000% : 11544.418us 00:07:48.342 25.00000% : 12603.077us 00:07:48.342 50.00000% : 13913.797us 00:07:48.342 75.00000% : 15426.166us 00:07:48.342 90.00000% : 16333.588us 00:07:48.342 95.00000% : 16837.711us 00:07:48.342 98.00000% : 18350.080us 00:07:48.342 99.00000% : 21173.169us 00:07:48.342 99.50000% : 28835.840us 00:07:48.342 99.90000% : 29440.788us 00:07:48.342 99.99000% : 29440.788us 00:07:48.342 99.99900% : 29440.788us 00:07:48.342 99.99990% : 29440.788us 00:07:48.342 99.99999% : 29440.788us 00:07:48.342 00:07:48.342 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:48.342 ================================================================================= 00:07:48.342 1.00000% : 10334.523us 00:07:48.342 10.00000% : 11494.006us 00:07:48.342 25.00000% : 12703.902us 00:07:48.342 50.00000% : 13913.797us 00:07:48.342 75.00000% : 15325.342us 00:07:48.342 90.00000% : 16434.412us 00:07:48.342 95.00000% : 17039.360us 00:07:48.342 98.00000% : 17946.782us 00:07:48.342 99.00000% : 21072.345us 00:07:48.342 99.50000% : 28835.840us 00:07:48.342 99.90000% : 29440.788us 00:07:48.342 99.99000% : 29642.437us 00:07:48.342 99.99900% : 29642.437us 00:07:48.342 99.99990% : 29642.437us 00:07:48.342 99.99999% : 29642.437us 00:07:48.342 00:07:48.342 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:48.342 ============================================================================== 00:07:48.342 Range in us Cumulative IO count 00:07:48.343 9880.812 - 9931.225: 0.0219% ( 2) 00:07:48.343 9931.225 - 9981.637: 0.0328% ( 1) 00:07:48.343 9981.637 - 10032.049: 0.0656% ( 3) 00:07:48.343 10032.049 - 10082.462: 0.0874% ( 2) 00:07:48.343 10082.462 - 10132.874: 0.1093% ( 2) 00:07:48.343 10132.874 - 10183.286: 0.1420% ( 3) 00:07:48.343 10183.286 - 10233.698: 0.1858% ( 4) 00:07:48.343 10233.698 - 10284.111: 0.2513% ( 6) 00:07:48.343 10284.111 - 10334.523: 0.3606% ( 10) 00:07:48.343 10334.523 - 10384.935: 0.5135% ( 14) 00:07:48.343 10384.935 - 10435.348: 0.7321% ( 20) 00:07:48.343 10435.348 - 10485.760: 0.9725% ( 22) 00:07:48.343 10485.760 - 10536.172: 1.1910% ( 20) 00:07:48.343 10536.172 - 10586.585: 1.3767% ( 17) 00:07:48.343 10586.585 - 10636.997: 1.5844% ( 19) 00:07:48.343 10636.997 - 10687.409: 1.8357% ( 23) 00:07:48.343 10687.409 - 10737.822: 2.0979% ( 24) 00:07:48.343 10737.822 - 10788.234: 2.3820% ( 26) 00:07:48.343 10788.234 - 10838.646: 2.6661% ( 26) 00:07:48.343 10838.646 - 10889.058: 3.0267% ( 33) 00:07:48.343 10889.058 - 10939.471: 3.4309% ( 37) 00:07:48.343 10939.471 - 10989.883: 3.8462% ( 38) 00:07:48.343 10989.883 - 11040.295: 4.2941% ( 41) 00:07:48.343 11040.295 - 11090.708: 4.6875% ( 36) 00:07:48.343 11090.708 - 11141.120: 5.2010% ( 47) 00:07:48.343 11141.120 - 11191.532: 5.6927% ( 45) 00:07:48.343 11191.532 - 11241.945: 6.1735% ( 44) 00:07:48.343 11241.945 - 11292.357: 6.6871% ( 47) 00:07:48.343 11292.357 - 11342.769: 7.1897% ( 46) 00:07:48.343 11342.769 - 11393.182: 7.6705% ( 44) 00:07:48.343 11393.182 - 11443.594: 8.1840% ( 47) 00:07:48.343 11443.594 - 11494.006: 8.8615% ( 62) 
00:07:48.343 11494.006 - 11544.418: 9.5389% ( 62) 00:07:48.343 11544.418 - 11594.831: 10.1617% ( 57) 00:07:48.343 11594.831 - 11645.243: 10.9266% ( 70) 00:07:48.343 11645.243 - 11695.655: 11.7024% ( 71) 00:07:48.343 11695.655 - 11746.068: 12.4454% ( 68) 00:07:48.343 11746.068 - 11796.480: 13.2758% ( 76) 00:07:48.343 11796.480 - 11846.892: 14.0188% ( 68) 00:07:48.343 11846.892 - 11897.305: 14.7072% ( 63) 00:07:48.343 11897.305 - 11947.717: 15.3518% ( 59) 00:07:48.343 11947.717 - 11998.129: 16.0402% ( 63) 00:07:48.343 11998.129 - 12048.542: 16.6958% ( 60) 00:07:48.343 12048.542 - 12098.954: 17.2640% ( 52) 00:07:48.343 12098.954 - 12149.366: 17.8977% ( 58) 00:07:48.343 12149.366 - 12199.778: 18.5205% ( 57) 00:07:48.343 12199.778 - 12250.191: 19.1761% ( 60) 00:07:48.343 12250.191 - 12300.603: 19.8645% ( 63) 00:07:48.343 12300.603 - 12351.015: 20.5092% ( 59) 00:07:48.343 12351.015 - 12401.428: 21.2194% ( 65) 00:07:48.343 12401.428 - 12451.840: 21.8094% ( 54) 00:07:48.343 12451.840 - 12502.252: 22.5524% ( 68) 00:07:48.343 12502.252 - 12552.665: 23.2736% ( 66) 00:07:48.343 12552.665 - 12603.077: 24.0385% ( 70) 00:07:48.343 12603.077 - 12653.489: 24.8689% ( 76) 00:07:48.343 12653.489 - 12703.902: 25.7102% ( 77) 00:07:48.343 12703.902 - 12754.314: 26.3767% ( 61) 00:07:48.343 12754.314 - 12804.726: 27.1635% ( 72) 00:07:48.343 12804.726 - 12855.138: 28.0485% ( 81) 00:07:48.343 12855.138 - 12905.551: 29.0975% ( 96) 00:07:48.343 12905.551 - 13006.375: 31.1189% ( 185) 00:07:48.343 13006.375 - 13107.200: 33.1075% ( 182) 00:07:48.343 13107.200 - 13208.025: 35.2819% ( 199) 00:07:48.343 13208.025 - 13308.849: 37.5656% ( 209) 00:07:48.343 13308.849 - 13409.674: 40.0131% ( 224) 00:07:48.343 13409.674 - 13510.498: 42.1329% ( 194) 00:07:48.343 13510.498 - 13611.323: 44.0559% ( 176) 00:07:48.343 13611.323 - 13712.148: 45.9572% ( 174) 00:07:48.343 13712.148 - 13812.972: 47.7273% ( 162) 00:07:48.343 13812.972 - 13913.797: 49.4318% ( 156) 00:07:48.343 13913.797 - 14014.622: 51.4423% ( 184) 00:07:48.343 14014.622 - 14115.446: 53.5621% ( 194) 00:07:48.343 14115.446 - 14216.271: 55.7583% ( 201) 00:07:48.343 14216.271 - 14317.095: 57.8234% ( 189) 00:07:48.343 14317.095 - 14417.920: 59.9104% ( 191) 00:07:48.343 14417.920 - 14518.745: 61.9646% ( 188) 00:07:48.343 14518.745 - 14619.569: 63.9314% ( 180) 00:07:48.343 14619.569 - 14720.394: 66.0293% ( 192) 00:07:48.343 14720.394 - 14821.218: 67.9414% ( 175) 00:07:48.343 14821.218 - 14922.043: 69.7225% ( 163) 00:07:48.343 14922.043 - 15022.868: 71.3505% ( 149) 00:07:48.343 15022.868 - 15123.692: 72.7710% ( 130) 00:07:48.343 15123.692 - 15224.517: 74.1914% ( 130) 00:07:48.343 15224.517 - 15325.342: 75.6884% ( 137) 00:07:48.343 15325.342 - 15426.166: 77.1525% ( 134) 00:07:48.343 15426.166 - 15526.991: 78.6604% ( 138) 00:07:48.343 15526.991 - 15627.815: 80.1027% ( 132) 00:07:48.343 15627.815 - 15728.640: 81.5778% ( 135) 00:07:48.343 15728.640 - 15829.465: 83.1184% ( 141) 00:07:48.343 15829.465 - 15930.289: 84.6482% ( 140) 00:07:48.343 15930.289 - 16031.114: 86.0140% ( 125) 00:07:48.343 16031.114 - 16131.938: 87.4563% ( 132) 00:07:48.343 16131.938 - 16232.763: 88.8549% ( 128) 00:07:48.343 16232.763 - 16333.588: 90.1879% ( 122) 00:07:48.343 16333.588 - 16434.412: 91.5756% ( 127) 00:07:48.343 16434.412 - 16535.237: 92.7338% ( 106) 00:07:48.343 16535.237 - 16636.062: 93.6517% ( 84) 00:07:48.343 16636.062 - 16736.886: 94.4493% ( 73) 00:07:48.343 16736.886 - 16837.711: 95.1595% ( 65) 00:07:48.343 16837.711 - 16938.535: 95.7605% ( 55) 00:07:48.343 16938.535 - 17039.360: 96.2959% ( 49) 
00:07:48.343 17039.360 - 17140.185: 96.7002% ( 37) 00:07:48.343 17140.185 - 17241.009: 97.1263% ( 39) 00:07:48.343 17241.009 - 17341.834: 97.4213% ( 27) 00:07:48.343 17341.834 - 17442.658: 97.6836% ( 24) 00:07:48.343 17442.658 - 17543.483: 97.8693% ( 17) 00:07:48.343 17543.483 - 17644.308: 98.0551% ( 17) 00:07:48.343 17644.308 - 17745.132: 98.2299% ( 16) 00:07:48.343 17745.132 - 17845.957: 98.3938% ( 15) 00:07:48.343 17845.957 - 17946.782: 98.4703% ( 7) 00:07:48.343 17946.782 - 18047.606: 98.5358% ( 6) 00:07:48.343 18047.606 - 18148.431: 98.5905% ( 5) 00:07:48.343 18148.431 - 18249.255: 98.6014% ( 1) 00:07:48.343 18753.378 - 18854.203: 98.6342% ( 3) 00:07:48.343 18854.203 - 18955.028: 98.6779% ( 4) 00:07:48.343 18955.028 - 19055.852: 98.7216% ( 4) 00:07:48.343 19055.852 - 19156.677: 98.7653% ( 4) 00:07:48.343 19156.677 - 19257.502: 98.8090% ( 4) 00:07:48.343 19257.502 - 19358.326: 98.8309% ( 2) 00:07:48.343 19358.326 - 19459.151: 98.8636% ( 3) 00:07:48.343 19459.151 - 19559.975: 98.9073% ( 4) 00:07:48.343 19559.975 - 19660.800: 98.9510% ( 4) 00:07:48.343 19660.800 - 19761.625: 99.0057% ( 5) 00:07:48.343 19761.625 - 19862.449: 99.0494% ( 4) 00:07:48.343 19862.449 - 19963.274: 99.0931% ( 4) 00:07:48.343 19963.274 - 20064.098: 99.1477% ( 5) 00:07:48.343 20064.098 - 20164.923: 99.1914% ( 4) 00:07:48.343 20164.923 - 20265.748: 99.2351% ( 4) 00:07:48.343 20265.748 - 20366.572: 99.2788% ( 4) 00:07:48.343 20366.572 - 20467.397: 99.3007% ( 2) 00:07:48.343 27222.646 - 27424.295: 99.3772% ( 7) 00:07:48.343 27424.295 - 27625.945: 99.4755% ( 9) 00:07:48.343 27625.945 - 27827.594: 99.5629% ( 8) 00:07:48.343 27827.594 - 28029.243: 99.6613% ( 9) 00:07:48.343 28029.243 - 28230.892: 99.7596% ( 9) 00:07:48.343 28230.892 - 28432.542: 99.8580% ( 9) 00:07:48.343 28432.542 - 28634.191: 99.9563% ( 9) 00:07:48.343 28634.191 - 28835.840: 100.0000% ( 4) 00:07:48.343 00:07:48.343 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:48.343 ============================================================================== 00:07:48.343 Range in us Cumulative IO count 00:07:48.343 8570.092 - 8620.505: 0.0109% ( 1) 00:07:48.343 8620.505 - 8670.917: 0.0328% ( 2) 00:07:48.343 8670.917 - 8721.329: 0.0546% ( 2) 00:07:48.343 8721.329 - 8771.742: 0.0874% ( 3) 00:07:48.343 8771.742 - 8822.154: 0.1093% ( 2) 00:07:48.343 8822.154 - 8872.566: 0.1420% ( 3) 00:07:48.343 8872.566 - 8922.978: 0.1748% ( 3) 00:07:48.343 8922.978 - 8973.391: 0.1967% ( 2) 00:07:48.343 8973.391 - 9023.803: 0.2185% ( 2) 00:07:48.343 9023.803 - 9074.215: 0.2404% ( 2) 00:07:48.343 9074.215 - 9124.628: 0.2732% ( 3) 00:07:48.343 9124.628 - 9175.040: 0.2950% ( 2) 00:07:48.343 9175.040 - 9225.452: 0.3169% ( 2) 00:07:48.343 9225.452 - 9275.865: 0.3387% ( 2) 00:07:48.343 9275.865 - 9326.277: 0.3715% ( 3) 00:07:48.343 9326.277 - 9376.689: 0.3824% ( 1) 00:07:48.343 9376.689 - 9427.102: 0.4043% ( 2) 00:07:48.343 9427.102 - 9477.514: 0.4261% ( 2) 00:07:48.343 9477.514 - 9527.926: 0.4480% ( 2) 00:07:48.343 9527.926 - 9578.338: 0.4698% ( 2) 00:07:48.343 9578.338 - 9628.751: 0.5026% ( 3) 00:07:48.343 9628.751 - 9679.163: 0.5245% ( 2) 00:07:48.343 9679.163 - 9729.575: 0.5463% ( 2) 00:07:48.343 9729.575 - 9779.988: 0.5791% ( 3) 00:07:48.343 9779.988 - 9830.400: 0.6010% ( 2) 00:07:48.343 9830.400 - 9880.812: 0.6228% ( 2) 00:07:48.343 9880.812 - 9931.225: 0.6447% ( 2) 00:07:48.343 9931.225 - 9981.637: 0.6774% ( 3) 00:07:48.343 9981.637 - 10032.049: 0.6884% ( 1) 00:07:48.343 10032.049 - 10082.462: 0.6993% ( 1) 00:07:48.343 10334.523 - 10384.935: 0.7212% ( 2) 
00:07:48.343 10384.935 - 10435.348: 0.8851% ( 15) 00:07:48.343 10435.348 - 10485.760: 0.9397% ( 5) 00:07:48.343 10485.760 - 10536.172: 1.0052% ( 6) 00:07:48.343 10536.172 - 10586.585: 1.1145% ( 10) 00:07:48.343 10586.585 - 10636.997: 1.2019% ( 8) 00:07:48.343 10636.997 - 10687.409: 1.3330% ( 12) 00:07:48.343 10687.409 - 10737.822: 1.5844% ( 23) 00:07:48.343 10737.822 - 10788.234: 1.8575% ( 25) 00:07:48.343 10788.234 - 10838.646: 2.0870% ( 21) 00:07:48.343 10838.646 - 10889.058: 2.2727% ( 17) 00:07:48.343 10889.058 - 10939.471: 2.4585% ( 17) 00:07:48.343 10939.471 - 10989.883: 2.7316% ( 25) 00:07:48.343 10989.883 - 11040.295: 3.0922% ( 33) 00:07:48.343 11040.295 - 11090.708: 3.4419% ( 32) 00:07:48.343 11090.708 - 11141.120: 3.8899% ( 41) 00:07:48.343 11141.120 - 11191.532: 4.3160% ( 39) 00:07:48.343 11191.532 - 11241.945: 4.7858% ( 43) 00:07:48.344 11241.945 - 11292.357: 5.3431% ( 51) 00:07:48.344 11292.357 - 11342.769: 5.9441% ( 55) 00:07:48.344 11342.769 - 11393.182: 6.4795% ( 49) 00:07:48.344 11393.182 - 11443.594: 7.0039% ( 48) 00:07:48.344 11443.594 - 11494.006: 7.6923% ( 63) 00:07:48.344 11494.006 - 11544.418: 8.3370% ( 59) 00:07:48.344 11544.418 - 11594.831: 9.0253% ( 63) 00:07:48.344 11594.831 - 11645.243: 9.7574% ( 67) 00:07:48.344 11645.243 - 11695.655: 10.3584% ( 55) 00:07:48.344 11695.655 - 11746.068: 11.1451% ( 72) 00:07:48.344 11746.068 - 11796.480: 11.9427% ( 73) 00:07:48.344 11796.480 - 11846.892: 12.7404% ( 73) 00:07:48.344 11846.892 - 11897.305: 13.5708% ( 76) 00:07:48.344 11897.305 - 11947.717: 14.4231% ( 78) 00:07:48.344 11947.717 - 11998.129: 15.2535% ( 76) 00:07:48.344 11998.129 - 12048.542: 15.9965% ( 68) 00:07:48.344 12048.542 - 12098.954: 16.7832% ( 72) 00:07:48.344 12098.954 - 12149.366: 17.5481% ( 70) 00:07:48.344 12149.366 - 12199.778: 18.3129% ( 70) 00:07:48.344 12199.778 - 12250.191: 19.0559% ( 68) 00:07:48.344 12250.191 - 12300.603: 19.7662% ( 65) 00:07:48.344 12300.603 - 12351.015: 20.4873% ( 66) 00:07:48.344 12351.015 - 12401.428: 21.0992% ( 56) 00:07:48.344 12401.428 - 12451.840: 21.7330% ( 58) 00:07:48.344 12451.840 - 12502.252: 22.3558% ( 57) 00:07:48.344 12502.252 - 12552.665: 22.9567% ( 55) 00:07:48.344 12552.665 - 12603.077: 23.5686% ( 56) 00:07:48.344 12603.077 - 12653.489: 24.2461% ( 62) 00:07:48.344 12653.489 - 12703.902: 24.8798% ( 58) 00:07:48.344 12703.902 - 12754.314: 25.5573% ( 62) 00:07:48.344 12754.314 - 12804.726: 26.4642% ( 83) 00:07:48.344 12804.726 - 12855.138: 27.2727% ( 74) 00:07:48.344 12855.138 - 12905.551: 28.1796% ( 83) 00:07:48.344 12905.551 - 13006.375: 30.0153% ( 168) 00:07:48.344 13006.375 - 13107.200: 31.8947% ( 172) 00:07:48.344 13107.200 - 13208.025: 33.8724% ( 181) 00:07:48.344 13208.025 - 13308.849: 36.0577% ( 200) 00:07:48.344 13308.849 - 13409.674: 38.3413% ( 209) 00:07:48.344 13409.674 - 13510.498: 40.7124% ( 217) 00:07:48.344 13510.498 - 13611.323: 43.1600% ( 224) 00:07:48.344 13611.323 - 13712.148: 45.6512% ( 228) 00:07:48.344 13712.148 - 13812.972: 48.3064% ( 243) 00:07:48.344 13812.972 - 13913.797: 50.8413% ( 232) 00:07:48.344 13913.797 - 14014.622: 53.2780% ( 223) 00:07:48.344 14014.622 - 14115.446: 55.6053% ( 213) 00:07:48.344 14115.446 - 14216.271: 57.9218% ( 212) 00:07:48.344 14216.271 - 14317.095: 60.2163% ( 210) 00:07:48.344 14317.095 - 14417.920: 62.2487% ( 186) 00:07:48.344 14417.920 - 14518.745: 64.0516% ( 165) 00:07:48.344 14518.745 - 14619.569: 65.9856% ( 177) 00:07:48.344 14619.569 - 14720.394: 67.9087% ( 176) 00:07:48.344 14720.394 - 14821.218: 69.6460% ( 159) 00:07:48.344 14821.218 - 14922.043: 71.3068% 
( 152) 00:07:48.344 14922.043 - 15022.868: 72.8693% ( 143) 00:07:48.344 15022.868 - 15123.692: 74.4318% ( 143) 00:07:48.344 15123.692 - 15224.517: 75.8741% ( 132) 00:07:48.344 15224.517 - 15325.342: 77.1525% ( 117) 00:07:48.344 15325.342 - 15426.166: 78.4965% ( 123) 00:07:48.344 15426.166 - 15526.991: 79.9060% ( 129) 00:07:48.344 15526.991 - 15627.815: 81.2172% ( 120) 00:07:48.344 15627.815 - 15728.640: 82.4410% ( 112) 00:07:48.344 15728.640 - 15829.465: 83.7850% ( 123) 00:07:48.344 15829.465 - 15930.289: 85.0634% ( 117) 00:07:48.344 15930.289 - 16031.114: 86.1670% ( 101) 00:07:48.344 16031.114 - 16131.938: 87.4017% ( 113) 00:07:48.344 16131.938 - 16232.763: 88.5708% ( 107) 00:07:48.344 16232.763 - 16333.588: 89.7072% ( 104) 00:07:48.344 16333.588 - 16434.412: 90.7889% ( 99) 00:07:48.344 16434.412 - 16535.237: 91.8925% ( 101) 00:07:48.344 16535.237 - 16636.062: 92.9742% ( 99) 00:07:48.344 16636.062 - 16736.886: 93.9030% ( 85) 00:07:48.344 16736.886 - 16837.711: 94.7662% ( 79) 00:07:48.344 16837.711 - 16938.535: 95.5201% ( 69) 00:07:48.344 16938.535 - 17039.360: 96.1757% ( 60) 00:07:48.344 17039.360 - 17140.185: 96.6892% ( 47) 00:07:48.344 17140.185 - 17241.009: 97.0717% ( 35) 00:07:48.344 17241.009 - 17341.834: 97.4323% ( 33) 00:07:48.344 17341.834 - 17442.658: 97.6617% ( 21) 00:07:48.344 17442.658 - 17543.483: 97.8802% ( 20) 00:07:48.344 17543.483 - 17644.308: 98.0660% ( 17) 00:07:48.344 17644.308 - 17745.132: 98.1643% ( 9) 00:07:48.344 17745.132 - 17845.957: 98.2627% ( 9) 00:07:48.344 17845.957 - 17946.782: 98.3501% ( 8) 00:07:48.344 17946.782 - 18047.606: 98.4156% ( 6) 00:07:48.344 18047.606 - 18148.431: 98.4812% ( 6) 00:07:48.344 18148.431 - 18249.255: 98.5249% ( 4) 00:07:48.344 18249.255 - 18350.080: 98.5905% ( 6) 00:07:48.344 18350.080 - 18450.905: 98.6014% ( 1) 00:07:48.344 19459.151 - 19559.975: 98.6233% ( 2) 00:07:48.344 19559.975 - 19660.800: 98.6670% ( 4) 00:07:48.344 19660.800 - 19761.625: 98.7107% ( 4) 00:07:48.344 19761.625 - 19862.449: 98.7325% ( 2) 00:07:48.344 19862.449 - 19963.274: 98.7872% ( 5) 00:07:48.344 19963.274 - 20064.098: 98.8309% ( 4) 00:07:48.344 20064.098 - 20164.923: 98.8746% ( 4) 00:07:48.344 20164.923 - 20265.748: 98.9292% ( 5) 00:07:48.344 20265.748 - 20366.572: 98.9729% ( 4) 00:07:48.344 20366.572 - 20467.397: 99.0057% ( 3) 00:07:48.344 20467.397 - 20568.222: 99.0603% ( 5) 00:07:48.344 20568.222 - 20669.046: 99.1040% ( 4) 00:07:48.344 20669.046 - 20769.871: 99.1477% ( 4) 00:07:48.344 20769.871 - 20870.695: 99.2024% ( 5) 00:07:48.344 20870.695 - 20971.520: 99.2351% ( 3) 00:07:48.344 20971.520 - 21072.345: 99.2788% ( 4) 00:07:48.344 21072.345 - 21173.169: 99.3007% ( 2) 00:07:48.344 27020.997 - 27222.646: 99.3116% ( 1) 00:07:48.344 27222.646 - 27424.295: 99.3990% ( 8) 00:07:48.344 27424.295 - 27625.945: 99.4974% ( 9) 00:07:48.344 27625.945 - 27827.594: 99.5848% ( 8) 00:07:48.344 27827.594 - 28029.243: 99.6831% ( 9) 00:07:48.344 28029.243 - 28230.892: 99.7815% ( 9) 00:07:48.344 28230.892 - 28432.542: 99.8798% ( 9) 00:07:48.344 28432.542 - 28634.191: 99.9672% ( 8) 00:07:48.344 28634.191 - 28835.840: 100.0000% ( 3) 00:07:48.344 00:07:48.344 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:48.344 ============================================================================== 00:07:48.344 Range in us Cumulative IO count 00:07:48.344 7410.609 - 7461.022: 0.0219% ( 2) 00:07:48.344 7461.022 - 7511.434: 0.0437% ( 2) 00:07:48.344 7511.434 - 7561.846: 0.0546% ( 1) 00:07:48.344 7561.846 - 7612.258: 0.0765% ( 2) 00:07:48.344 7612.258 - 7662.671: 0.0983% 
( 2) 00:07:48.344 7662.671 - 7713.083: 0.1202% ( 2) 00:07:48.344 7713.083 - 7763.495: 0.1311% ( 1) 00:07:48.344 7763.495 - 7813.908: 0.1639% ( 3) 00:07:48.344 7813.908 - 7864.320: 0.1748% ( 1) 00:07:48.344 7864.320 - 7914.732: 0.1967% ( 2) 00:07:48.344 7914.732 - 7965.145: 0.2185% ( 2) 00:07:48.344 7965.145 - 8015.557: 0.2404% ( 2) 00:07:48.344 8015.557 - 8065.969: 0.2513% ( 1) 00:07:48.344 8065.969 - 8116.382: 0.2841% ( 3) 00:07:48.344 8116.382 - 8166.794: 0.2950% ( 1) 00:07:48.344 8166.794 - 8217.206: 0.3169% ( 2) 00:07:48.344 8217.206 - 8267.618: 0.3387% ( 2) 00:07:48.344 8267.618 - 8318.031: 0.3606% ( 2) 00:07:48.344 8318.031 - 8368.443: 0.3824% ( 2) 00:07:48.344 8368.443 - 8418.855: 0.4043% ( 2) 00:07:48.344 8418.855 - 8469.268: 0.4261% ( 2) 00:07:48.344 8620.505 - 8670.917: 0.4808% ( 5) 00:07:48.344 8670.917 - 8721.329: 0.5245% ( 4) 00:07:48.344 8721.329 - 8771.742: 0.5463% ( 2) 00:07:48.344 8771.742 - 8822.154: 0.5573% ( 1) 00:07:48.344 8822.154 - 8872.566: 0.5791% ( 2) 00:07:48.344 8872.566 - 8922.978: 0.6228% ( 4) 00:07:48.344 8922.978 - 8973.391: 0.6556% ( 3) 00:07:48.344 8973.391 - 9023.803: 0.6774% ( 2) 00:07:48.344 9023.803 - 9074.215: 0.6993% ( 2) 00:07:48.344 10132.874 - 10183.286: 0.7212% ( 2) 00:07:48.344 10183.286 - 10233.698: 0.7539% ( 3) 00:07:48.344 10233.698 - 10284.111: 0.7976% ( 4) 00:07:48.344 10284.111 - 10334.523: 0.8195% ( 2) 00:07:48.344 10334.523 - 10384.935: 0.8523% ( 3) 00:07:48.344 10384.935 - 10435.348: 0.8851% ( 3) 00:07:48.344 10435.348 - 10485.760: 0.9069% ( 2) 00:07:48.344 10485.760 - 10536.172: 0.9288% ( 2) 00:07:48.344 10536.172 - 10586.585: 1.0052% ( 7) 00:07:48.344 10586.585 - 10636.997: 1.1364% ( 12) 00:07:48.344 10636.997 - 10687.409: 1.3658% ( 21) 00:07:48.344 10687.409 - 10737.822: 1.5297% ( 15) 00:07:48.344 10737.822 - 10788.234: 1.7920% ( 24) 00:07:48.344 10788.234 - 10838.646: 1.9777% ( 17) 00:07:48.344 10838.646 - 10889.058: 2.3274% ( 32) 00:07:48.344 10889.058 - 10939.471: 2.6552% ( 30) 00:07:48.344 10939.471 - 10989.883: 3.0813% ( 39) 00:07:48.344 10989.883 - 11040.295: 3.3326% ( 23) 00:07:48.344 11040.295 - 11090.708: 3.7697% ( 40) 00:07:48.344 11090.708 - 11141.120: 4.1521% ( 35) 00:07:48.344 11141.120 - 11191.532: 4.5782% ( 39) 00:07:48.344 11191.532 - 11241.945: 5.0809% ( 46) 00:07:48.344 11241.945 - 11292.357: 5.5507% ( 43) 00:07:48.344 11292.357 - 11342.769: 6.0970% ( 50) 00:07:48.344 11342.769 - 11393.182: 6.7089% ( 56) 00:07:48.344 11393.182 - 11443.594: 7.3427% ( 58) 00:07:48.344 11443.594 - 11494.006: 7.9436% ( 55) 00:07:48.344 11494.006 - 11544.418: 8.7959% ( 78) 00:07:48.344 11544.418 - 11594.831: 9.4515% ( 60) 00:07:48.344 11594.831 - 11645.243: 10.1726% ( 66) 00:07:48.344 11645.243 - 11695.655: 10.9156% ( 68) 00:07:48.344 11695.655 - 11746.068: 11.7570% ( 77) 00:07:48.344 11746.068 - 11796.480: 12.3798% ( 57) 00:07:48.344 11796.480 - 11846.892: 12.9698% ( 54) 00:07:48.344 11846.892 - 11897.305: 13.7456% ( 71) 00:07:48.344 11897.305 - 11947.717: 14.4449% ( 64) 00:07:48.344 11947.717 - 11998.129: 15.1552% ( 65) 00:07:48.344 11998.129 - 12048.542: 15.8435% ( 63) 00:07:48.344 12048.542 - 12098.954: 16.6302% ( 72) 00:07:48.344 12098.954 - 12149.366: 17.3733% ( 68) 00:07:48.344 12149.366 - 12199.778: 18.1927% ( 75) 00:07:48.344 12199.778 - 12250.191: 19.0450% ( 78) 00:07:48.344 12250.191 - 12300.603: 19.8754% ( 76) 00:07:48.344 12300.603 - 12351.015: 20.6949% ( 75) 00:07:48.345 12351.015 - 12401.428: 21.4379% ( 68) 00:07:48.345 12401.428 - 12451.840: 22.0935% ( 60) 00:07:48.345 12451.840 - 12502.252: 22.9458% ( 78) 00:07:48.345 
12502.252 - 12552.665: 23.9292% ( 90) 00:07:48.345 12552.665 - 12603.077: 24.6066% ( 62) 00:07:48.345 12603.077 - 12653.489: 25.4589% ( 78) 00:07:48.345 12653.489 - 12703.902: 26.3221% ( 79) 00:07:48.345 12703.902 - 12754.314: 26.9668% ( 59) 00:07:48.345 12754.314 - 12804.726: 27.8409% ( 80) 00:07:48.345 12804.726 - 12855.138: 28.5402% ( 64) 00:07:48.345 12855.138 - 12905.551: 29.2067% ( 61) 00:07:48.345 12905.551 - 13006.375: 30.8239% ( 148) 00:07:48.345 13006.375 - 13107.200: 32.6158% ( 164) 00:07:48.345 13107.200 - 13208.025: 34.4078% ( 164) 00:07:48.345 13208.025 - 13308.849: 36.2434% ( 168) 00:07:48.345 13308.849 - 13409.674: 38.3741% ( 195) 00:07:48.345 13409.674 - 13510.498: 40.2863% ( 175) 00:07:48.345 13510.498 - 13611.323: 42.7010% ( 221) 00:07:48.345 13611.323 - 13712.148: 44.8427% ( 196) 00:07:48.345 13712.148 - 13812.972: 46.8859% ( 187) 00:07:48.345 13812.972 - 13913.797: 49.2461% ( 216) 00:07:48.345 13913.797 - 14014.622: 51.7592% ( 230) 00:07:48.345 14014.622 - 14115.446: 54.3925% ( 241) 00:07:48.345 14115.446 - 14216.271: 56.9930% ( 238) 00:07:48.345 14216.271 - 14317.095: 59.6045% ( 239) 00:07:48.345 14317.095 - 14417.920: 61.5166% ( 175) 00:07:48.345 14417.920 - 14518.745: 63.7675% ( 206) 00:07:48.345 14518.745 - 14619.569: 65.5485% ( 163) 00:07:48.345 14619.569 - 14720.394: 67.4716% ( 176) 00:07:48.345 14720.394 - 14821.218: 69.2198% ( 160) 00:07:48.345 14821.218 - 14922.043: 70.9135% ( 155) 00:07:48.345 14922.043 - 15022.868: 72.7273% ( 166) 00:07:48.345 15022.868 - 15123.692: 74.3990% ( 153) 00:07:48.345 15123.692 - 15224.517: 75.9288% ( 140) 00:07:48.345 15224.517 - 15325.342: 77.4148% ( 136) 00:07:48.345 15325.342 - 15426.166: 78.8680% ( 133) 00:07:48.345 15426.166 - 15526.991: 80.2666% ( 128) 00:07:48.345 15526.991 - 15627.815: 82.0149% ( 160) 00:07:48.345 15627.815 - 15728.640: 83.5337% ( 139) 00:07:48.345 15728.640 - 15829.465: 84.9213% ( 127) 00:07:48.345 15829.465 - 15930.289: 86.1233% ( 110) 00:07:48.345 15930.289 - 16031.114: 87.4781% ( 124) 00:07:48.345 16031.114 - 16131.938: 88.8658% ( 127) 00:07:48.345 16131.938 - 16232.763: 90.0240% ( 106) 00:07:48.345 16232.763 - 16333.588: 91.0730% ( 96) 00:07:48.345 16333.588 - 16434.412: 92.0236% ( 87) 00:07:48.345 16434.412 - 16535.237: 92.9305% ( 83) 00:07:48.345 16535.237 - 16636.062: 93.8046% ( 80) 00:07:48.345 16636.062 - 16736.886: 94.5149% ( 65) 00:07:48.345 16736.886 - 16837.711: 95.1486% ( 58) 00:07:48.345 16837.711 - 16938.535: 95.5529% ( 37) 00:07:48.345 16938.535 - 17039.360: 95.9899% ( 40) 00:07:48.345 17039.360 - 17140.185: 96.4489% ( 42) 00:07:48.345 17140.185 - 17241.009: 96.7876% ( 31) 00:07:48.345 17241.009 - 17341.834: 97.0826% ( 27) 00:07:48.345 17341.834 - 17442.658: 97.1809% ( 9) 00:07:48.345 17442.658 - 17543.483: 97.2574% ( 7) 00:07:48.345 17543.483 - 17644.308: 97.4432% ( 17) 00:07:48.345 17644.308 - 17745.132: 97.5852% ( 13) 00:07:48.345 17745.132 - 17845.957: 97.7928% ( 19) 00:07:48.345 17845.957 - 17946.782: 97.9677% ( 16) 00:07:48.345 17946.782 - 18047.606: 98.0114% ( 4) 00:07:48.345 18047.606 - 18148.431: 98.1643% ( 14) 00:07:48.345 18148.431 - 18249.255: 98.2627% ( 9) 00:07:48.345 18249.255 - 18350.080: 98.3392% ( 7) 00:07:48.345 18350.080 - 18450.905: 98.3610% ( 2) 00:07:48.345 18450.905 - 18551.729: 98.4375% ( 7) 00:07:48.345 18551.729 - 18652.554: 98.4812% ( 4) 00:07:48.345 18652.554 - 18753.378: 98.5249% ( 4) 00:07:48.345 18753.378 - 18854.203: 98.5905% ( 6) 00:07:48.345 18854.203 - 18955.028: 98.6014% ( 1) 00:07:48.345 19358.326 - 19459.151: 98.6123% ( 1) 00:07:48.345 19459.151 - 
00:07:48.345 [cumulative latency buckets from 19559.975us through 28835.840us elided; histogram reaches 100.0000% at 28835.840us]
00:07:48.345 
00:07:48.345 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:48.345 ==============================================================================
00:07:48.345        Range in us     Cumulative    IO count
00:07:48.346 [per-bucket latency data elided; buckets span 6503.188us-28835.840us, cumulative 100.0000% reached at 28835.840us]
00:07:48.346 
00:07:48.346 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:48.346 ==============================================================================
00:07:48.346        Range in us     Cumulative    IO count
00:07:48.347 [per-bucket latency data elided; buckets span 5167.262us-29440.788us, cumulative 100.0000% reached at 29440.788us]
00:07:48.347 
00:07:48.347 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:48.347 ==============================================================================
00:07:48.347        Range in us     Cumulative    IO count
00:07:48.348 [per-bucket latency data elided; buckets span 4335.458us-29642.437us, cumulative 100.0000% reached at 29642.437us]
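The spdk_nvme_perf invocation recorded below drives all six namespaces at queue depth 128 (-q 128) with 12288-byte writes (-o 12288, -w write) for one second (-t 1); as I read the tool's options, -i selects the shared memory group ID and the doubled -LL requests the per-bucket latency histograms in addition to the percentile summaries. Those flag glosses come from spdk_nvme_perf's help text rather than from this log, so confirm against --help. A quick sanity check, a minimal Python sketch using the IOPS figure from the Device Information table that follows:

    # Hypothetical cross-check, not part of the log: throughput implied by the
    # -o 12288 I/O size and the IOPS column of the Device Information table.
    io_size_bytes = 12288        # from the -o flag (12 KiB writes)
    iops = 9622.11               # PCIE (0000:00:11.0) NSID 1 row
    mib_per_s = iops * io_size_bytes / (1024 * 1024)
    print(f"{mib_per_s:.2f} MiB/s")   # prints 112.76, matching the MiB/s column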
00:07:48.348 
00:07:48.348 18:21:07 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:49.286 Initializing NVMe Controllers
00:07:49.286 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:49.286 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:49.286 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:49.286 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:49.286 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:49.286 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:49.286 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:49.286 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:49.286 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:49.286 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:49.286 Initialization complete. Launching workers.
00:07:49.286 ========================================================
00:07:49.286                                                                            Latency(us)
00:07:49.286 Device Information                       :       IOPS      MiB/s    Average        min        max
00:07:49.286 PCIE (0000:00:11.0) NSID 1 from core  0:    9622.11     112.76   13313.03    9135.34   32084.63
00:07:49.286 PCIE (0000:00:13.0) NSID 1 from core  0:    9622.11     112.76   13306.74    9129.67   31545.46
00:07:49.286 PCIE (0000:00:10.0) NSID 1 from core  0:    9622.11     112.76   13295.52    8680.59   31301.38
00:07:49.286 PCIE (0000:00:12.0) NSID 1 from core  0:    9622.11     112.76   13284.49    7897.81   30693.28
00:07:49.286 PCIE (0000:00:12.0) NSID 2 from core  0:    9622.11     112.76   13274.30    6447.29   31347.29
00:07:49.286 PCIE (0000:00:12.0) NSID 3 from core  0:    9685.83     113.51   13176.79    5465.61   25661.99
00:07:49.286 ========================================================
00:07:49.286 Total                                    :   57796.35     677.30   13275.04    5465.61   32084.63
00:07:49.286 
00:07:49.286 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:49.286 =================================================================================
00:07:49.286      1.00000% :  9830.400us
00:07:49.286     10.00000% : 11090.708us
00:07:49.286     25.00000% : 11846.892us
00:07:49.286     50.00000% : 13006.375us
00:07:49.286     75.00000% : 14115.446us
00:07:49.286     90.00000% : 15728.640us
00:07:49.286     95.00000% : 17543.483us
00:07:49.286     98.00000% : 18955.028us
00:07:49.286     99.00000% : 22988.012us
00:07:49.286     99.50000% : 31053.982us
00:07:49.286     99.90000% : 32062.228us
00:07:49.286     99.99000% : 32263.877us
00:07:49.286     99.99900% : 32263.877us
00:07:49.286     99.99990% : 32263.877us
00:07:49.286     99.99999% : 32263.877us
00:07:49.286 
00:07:49.286 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:49.286 =================================================================================
00:07:49.286      1.00000% :  9779.988us
00:07:49.286     10.00000% : 11090.708us
00:07:49.286     25.00000% : 11897.305us
00:07:49.286     50.00000% : 13006.375us
00:07:49.286     75.00000% : 14216.271us
00:07:49.286     90.00000% : 15627.815us
00:07:49.286     95.00000% : 17442.658us
00:07:49.286     98.00000% : 18955.028us
00:07:49.286     99.00000% : 23189.662us
00:07:49.286     99.50000% : 30852.332us
00:07:49.286     99.90000% : 31457.280us
00:07:49.286     99.99000% : 31658.929us
00:07:49.286     99.99900% : 31658.929us
00:07:49.286     99.99990% : 31658.929us
00:07:49.286     99.99999% : 31658.929us
00:07:49.286 
00:07:49.286 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:49.286 =================================================================================
00:07:49.286      1.00000% :  9527.926us
00:07:49.286     10.00000% : 10989.883us
00:07:49.286     25.00000% : 11897.305us
00:07:49.286     50.00000% : 13006.375us
00:07:49.286     75.00000% : 14317.095us
00:07:49.286     90.00000% : 15627.815us
00:07:49.286     95.00000% : 17140.185us
00:07:49.286     98.00000% : 18854.203us
00:07:49.286     99.00000% : 23290.486us
00:07:49.286     99.50000% : 30247.385us
00:07:49.286     99.90000% : 31255.631us
00:07:49.286     99.99000% : 31457.280us
00:07:49.286     99.99900% : 31457.280us
00:07:49.286     99.99990% : 31457.280us
00:07:49.286     99.99999% : 31457.280us
00:07:49.286 
00:07:49.286 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:49.286 =================================================================================
00:07:49.286      1.00000% :  9578.338us
00:07:49.286     10.00000% : 10989.883us
00:07:49.286     25.00000% : 11998.129us
00:07:49.286     50.00000% : 13006.375us
00:07:49.286     75.00000% : 14216.271us
00:07:49.286     90.00000% : 15627.815us
00:07:49.286     95.00000% : 16938.535us
00:07:49.286     98.00000% : 18753.378us
00:07:49.286     99.00000% : 23290.486us
00:07:49.286     99.50000% : 29844.086us
00:07:49.286     99.90000% : 30650.683us
00:07:49.286     99.99000% : 30852.332us
00:07:49.286     99.99900% : 30852.332us
00:07:49.286     99.99990% : 30852.332us
00:07:49.286     99.99999% : 30852.332us
00:07:49.286 
00:07:49.286 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:49.286 =================================================================================
00:07:49.286      1.00000% :  9477.514us
00:07:49.286     10.00000% : 11040.295us
00:07:49.286     25.00000% : 11947.717us
00:07:49.286     50.00000% : 13006.375us
00:07:49.286     75.00000% : 14115.446us
00:07:49.286     90.00000% : 15829.465us
00:07:49.286     95.00000% : 16938.535us
00:07:49.286     98.00000% : 18854.203us
00:07:49.286     99.00000% : 24399.557us
00:07:49.286     99.50000% : 30650.683us
00:07:49.286     99.90000% : 31255.631us
00:07:49.286     99.99000% : 31457.280us
00:07:49.286     99.99900% : 31457.280us
00:07:49.286     99.99990% : 31457.280us
00:07:49.286     99.99999% : 31457.280us
00:07:49.286 
00:07:49.286 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:49.286 =================================================================================
00:07:49.286      1.00000% :  9427.102us
00:07:49.286     10.00000% : 11040.295us
00:07:49.286     25.00000% : 11846.892us
00:07:49.286     50.00000% : 12905.551us
00:07:49.286     75.00000% : 14216.271us
00:07:49.286     90.00000% : 15829.465us
00:07:49.286     95.00000% : 16938.535us
00:07:49.286     98.00000% : 18350.080us
00:07:49.286     99.00000% : 18753.378us
00:07:49.286     99.50000% : 24802.855us
00:07:49.286     99.90000% : 25508.628us
00:07:49.286     99.99000% : 25710.277us
00:07:49.286     99.99900% : 25710.277us
00:07:49.286     99.99990% : 25710.277us
00:07:49.286     99.99999% : 25710.277us
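Because the per-device summaries above are plain text with a stable shape, they can be scraped directly when comparing nightly runs. The following is a minimal sketch, assuming exactly the layout shown here (a "Summary latency data for ... from core N:" header followed by "P% : Xus" rows); the helper and pattern names are mine, not SPDK's:

    import re
    from collections import defaultdict

    HEADER = re.compile(r'Summary latency data for (.+?) from core \d+:')
    ENTRY = re.compile(r'(\d+\.\d+)%\s*:\s*(\d+\.\d+)us')

    def parse_latency_summaries(lines):
        # Returns {device: {percentile: latency_us}}, e.g.
        # {"PCIE (0000:00:11.0) NSID 1": {50.0: 13006.375, ...}}
        summaries = defaultdict(dict)
        device = None
        for line in lines:
            m = HEADER.search(line)
            if m:
                device = m.group(1)
                continue
            if device:
                for pct, lat in ENTRY.findall(line):
                    summaries[device][float(pct)] = float(lat)
        return summaries

    # Demo against two lines in the format shown above:
    demo = [
        "00:07:49.286 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:",
        "00:07:49.286     50.00000% : 13006.375us",
    ]
    print(parse_latency_summaries(demo))

Run over the six blocks above, this would show, for instance, that all namespaces sit near a ~13.0 ms median while 0000:00:12.0 NSID 3 has the tightest tail (99.99% at 25710.277us versus 32263.877us on 0000:00:11.0).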
00:07:49.287 
00:07:49.287 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:49.287 ==============================================================================
00:07:49.287        Range in us     Cumulative    IO count
00:07:49.287 [per-bucket latency data elided; buckets span 9124.628us-32263.877us, cumulative 100.0000% reached at 32263.877us]
00:07:49.288 
00:07:49.288 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:49.288 ==============================================================================
00:07:49.288        Range in us     Cumulative    IO count
00:07:49.288 [per-bucket latency data elided; buckets span 9124.628us-31658.929us, cumulative 100.0000% reached at 31658.929us]
00:07:49.288 
00:07:49.288 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:49.289 ==============================================================================
00:07:49.289        Range in us     Cumulative    IO count
00:07:49.289 [per-bucket latency data elided; buckets span 8670.917us-31457.280us, cumulative 100.0000% reached at 31457.280us]
00:07:49.289 
00:07:49.289 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:49.290 ==============================================================================
00:07:49.290        Range in us     Cumulative    IO count
00:07:49.290 [per-bucket latency data elided; buckets from 7864.320us through 19660.800us (98.4478% cumulative) fall in this span, and the histogram continues below]
00:07:49.290 19660.800 - 19761.625: 98.4892% ( 4) 00:07:49.290 19761.625 - 19862.449: 98.5306% ( 4) 00:07:49.290 19862.449 - 19963.274: 98.5824% ( 5) 00:07:49.290 19963.274 - 20064.098: 98.6341% ( 5) 00:07:49.290 20064.098 - 20164.923: 98.6651% ( 3) 00:07:49.290 20164.923 - 20265.748: 98.6755% ( 1) 00:07:49.290 22584.714 - 22685.538: 98.7065% ( 3) 00:07:49.290 22685.538 - 22786.363: 98.7583% ( 5) 00:07:49.290 22786.363 - 22887.188: 98.8100% ( 5) 00:07:49.290 22887.188 - 22988.012: 98.8721% ( 6) 00:07:49.290 22988.012 - 23088.837: 98.9238% ( 5) 00:07:49.290 23088.837 - 23189.662: 98.9859% ( 6) 00:07:49.290 23189.662 - 23290.486: 99.0480% ( 6) 00:07:49.290 23290.486 - 23391.311: 99.0998% ( 5) 00:07:49.290 23391.311 - 23492.135: 99.1618% ( 6) 00:07:49.290 23492.135 - 23592.960: 99.2136% ( 5) 00:07:49.290 23592.960 - 23693.785: 99.2653% ( 5) 00:07:49.290 23693.785 - 23794.609: 99.3171% ( 5) 00:07:49.290 23794.609 - 23895.434: 99.3377% ( 2) 00:07:49.290 29440.788 - 29642.437: 99.4102% ( 7) 00:07:49.290 29642.437 - 29844.086: 99.5240% ( 11) 00:07:49.290 29844.086 - 30045.735: 99.6378% ( 11) 00:07:49.290 30045.735 - 30247.385: 99.7517% ( 11) 00:07:49.290 30247.385 - 30449.034: 99.8655% ( 11) 00:07:49.290 30449.034 - 30650.683: 99.9690% ( 10) 00:07:49.290 30650.683 - 30852.332: 100.0000% ( 3) 00:07:49.290 00:07:49.290 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:49.290 ============================================================================== 00:07:49.290 Range in us Cumulative IO count 00:07:49.290 6427.569 - 6452.775: 0.0207% ( 2) 00:07:49.290 6452.775 - 6503.188: 0.0724% ( 5) 00:07:49.290 6503.188 - 6553.600: 0.1345% ( 6) 00:07:49.290 6553.600 - 6604.012: 0.1759% ( 4) 00:07:49.290 6604.012 - 6654.425: 0.2070% ( 3) 00:07:49.290 6654.425 - 6704.837: 0.2483% ( 4) 00:07:49.290 6704.837 - 6755.249: 0.2794% ( 3) 00:07:49.290 6755.249 - 6805.662: 0.3208% ( 4) 00:07:49.290 6805.662 - 6856.074: 0.3622% ( 4) 00:07:49.290 6856.074 - 6906.486: 0.3932% ( 3) 00:07:49.290 6906.486 - 6956.898: 0.4346% ( 4) 00:07:49.290 6956.898 - 7007.311: 0.4760% ( 4) 00:07:49.290 7007.311 - 7057.723: 0.5070% ( 3) 00:07:49.290 7057.723 - 7108.135: 0.5484% ( 4) 00:07:49.290 7108.135 - 7158.548: 0.5898% ( 4) 00:07:49.290 7158.548 - 7208.960: 0.6312% ( 4) 00:07:49.290 7208.960 - 7259.372: 0.6623% ( 3) 00:07:49.290 9225.452 - 9275.865: 0.7243% ( 6) 00:07:49.291 9275.865 - 9326.277: 0.7968% ( 7) 00:07:49.291 9326.277 - 9376.689: 0.8589% ( 6) 00:07:49.291 9376.689 - 9427.102: 0.9830% ( 12) 00:07:49.291 9427.102 - 9477.514: 1.0348% ( 5) 00:07:49.291 9477.514 - 9527.926: 1.0658% ( 3) 00:07:49.291 9527.926 - 9578.338: 1.1382% ( 7) 00:07:49.291 9578.338 - 9628.751: 1.1796% ( 4) 00:07:49.291 9628.751 - 9679.163: 1.2210% ( 4) 00:07:49.291 9679.163 - 9729.575: 1.2728% ( 5) 00:07:49.291 9729.575 - 9779.988: 1.4073% ( 13) 00:07:49.291 9779.988 - 9830.400: 1.5418% ( 13) 00:07:49.291 9830.400 - 9880.812: 1.8626% ( 31) 00:07:49.291 9880.812 - 9931.225: 2.1109% ( 24) 00:07:49.291 9931.225 - 9981.637: 2.4110% ( 29) 00:07:49.291 9981.637 - 10032.049: 2.8870% ( 46) 00:07:49.291 10032.049 - 10082.462: 3.5493% ( 64) 00:07:49.291 10082.462 - 10132.874: 3.7666% ( 21) 00:07:49.291 10132.874 - 10183.286: 4.0356% ( 26) 00:07:49.291 10183.286 - 10233.698: 4.2943% ( 25) 00:07:49.291 10233.698 - 10284.111: 4.6875% ( 38) 00:07:49.291 10284.111 - 10334.523: 5.2049% ( 50) 00:07:49.291 10334.523 - 10384.935: 5.8568% ( 63) 00:07:49.291 10384.935 - 10435.348: 6.2707% ( 40) 00:07:49.291 10435.348 - 10485.760: 6.7260% ( 44) 00:07:49.291 
10485.760 - 10536.172: 7.0778% ( 34) 00:07:49.291 10536.172 - 10586.585: 7.3675% ( 28) 00:07:49.291 10586.585 - 10636.997: 7.5745% ( 20) 00:07:49.291 10636.997 - 10687.409: 7.8228% ( 24) 00:07:49.291 10687.409 - 10737.822: 8.0608% ( 23) 00:07:49.291 10737.822 - 10788.234: 8.3299% ( 26) 00:07:49.291 10788.234 - 10838.646: 8.5265% ( 19) 00:07:49.291 10838.646 - 10889.058: 8.8266% ( 29) 00:07:49.291 10889.058 - 10939.471: 9.3233% ( 48) 00:07:49.291 10939.471 - 10989.883: 9.6854% ( 35) 00:07:49.291 10989.883 - 11040.295: 10.2546% ( 55) 00:07:49.291 11040.295 - 11090.708: 10.5960% ( 33) 00:07:49.291 11090.708 - 11141.120: 11.0927% ( 48) 00:07:49.291 11141.120 - 11191.532: 11.6825% ( 57) 00:07:49.291 11191.532 - 11241.945: 12.7483% ( 103) 00:07:49.291 11241.945 - 11292.357: 13.9176% ( 113) 00:07:49.291 11292.357 - 11342.769: 15.0559% ( 110) 00:07:49.291 11342.769 - 11393.182: 15.9251% ( 84) 00:07:49.291 11393.182 - 11443.594: 16.9185% ( 96) 00:07:49.291 11443.594 - 11494.006: 17.6842% ( 74) 00:07:49.291 11494.006 - 11544.418: 18.5844% ( 87) 00:07:49.291 11544.418 - 11594.831: 20.0021% ( 137) 00:07:49.291 11594.831 - 11645.243: 20.6747% ( 65) 00:07:49.291 11645.243 - 11695.655: 21.3990% ( 70) 00:07:49.291 11695.655 - 11746.068: 22.1854% ( 76) 00:07:49.291 11746.068 - 11796.480: 22.9201% ( 71) 00:07:49.291 11796.480 - 11846.892: 23.7893% ( 84) 00:07:49.291 11846.892 - 11897.305: 24.6378% ( 82) 00:07:49.291 11897.305 - 11947.717: 25.5691% ( 90) 00:07:49.291 11947.717 - 11998.129: 26.5418% ( 94) 00:07:49.291 11998.129 - 12048.542: 27.6387% ( 106) 00:07:49.291 12048.542 - 12098.954: 29.0666% ( 138) 00:07:49.291 12098.954 - 12149.366: 30.1118% ( 101) 00:07:49.291 12149.366 - 12199.778: 31.3638% ( 121) 00:07:49.291 12199.778 - 12250.191: 32.4814% ( 108) 00:07:49.291 12250.191 - 12300.603: 33.7852% ( 126) 00:07:49.291 12300.603 - 12351.015: 35.0062% ( 118) 00:07:49.291 12351.015 - 12401.428: 36.4342% ( 138) 00:07:49.291 12401.428 - 12451.840: 37.9036% ( 142) 00:07:49.291 12451.840 - 12502.252: 39.4350% ( 148) 00:07:49.291 12502.252 - 12552.665: 40.7595% ( 128) 00:07:49.291 12552.665 - 12603.077: 42.2910% ( 148) 00:07:49.291 12603.077 - 12653.489: 43.7707% ( 143) 00:07:49.291 12653.489 - 12703.902: 44.9917% ( 118) 00:07:49.291 12703.902 - 12754.314: 46.0679% ( 104) 00:07:49.291 12754.314 - 12804.726: 47.2061% ( 110) 00:07:49.291 12804.726 - 12855.138: 48.2823% ( 104) 00:07:49.291 12855.138 - 12905.551: 49.5344% ( 121) 00:07:49.291 12905.551 - 13006.375: 51.8729% ( 226) 00:07:49.291 13006.375 - 13107.200: 54.1287% ( 218) 00:07:49.291 13107.200 - 13208.025: 56.4466% ( 224) 00:07:49.291 13208.025 - 13308.849: 59.1474% ( 261) 00:07:49.291 13308.849 - 13409.674: 61.8274% ( 259) 00:07:49.291 13409.674 - 13510.498: 64.1039% ( 220) 00:07:49.291 13510.498 - 13611.323: 66.3700% ( 219) 00:07:49.291 13611.323 - 13712.148: 68.3050% ( 187) 00:07:49.291 13712.148 - 13812.972: 70.1780% ( 181) 00:07:49.291 13812.972 - 13913.797: 72.1026% ( 186) 00:07:49.291 13913.797 - 14014.622: 73.7169% ( 156) 00:07:49.291 14014.622 - 14115.446: 75.0931% ( 133) 00:07:49.291 14115.446 - 14216.271: 76.3555% ( 122) 00:07:49.291 14216.271 - 14317.095: 77.6076% ( 121) 00:07:49.291 14317.095 - 14417.920: 78.7666% ( 112) 00:07:49.291 14417.920 - 14518.745: 79.8945% ( 109) 00:07:49.291 14518.745 - 14619.569: 80.9499% ( 102) 00:07:49.291 14619.569 - 14720.394: 81.8812% ( 90) 00:07:49.291 14720.394 - 14821.218: 82.7711% ( 86) 00:07:49.291 14821.218 - 14922.043: 83.6610% ( 86) 00:07:49.291 14922.043 - 15022.868: 84.5406% ( 85) 00:07:49.291 
15022.868 - 15123.692: 85.2959% ( 73) 00:07:49.291 15123.692 - 15224.517: 86.0410% ( 72) 00:07:49.291 15224.517 - 15325.342: 86.7964% ( 73) 00:07:49.291 15325.342 - 15426.166: 87.5414% ( 72) 00:07:49.291 15426.166 - 15526.991: 88.3278% ( 76) 00:07:49.291 15526.991 - 15627.815: 88.9694% ( 62) 00:07:49.291 15627.815 - 15728.640: 89.7144% ( 72) 00:07:49.291 15728.640 - 15829.465: 90.3456% ( 61) 00:07:49.291 15829.465 - 15930.289: 90.9768% ( 61) 00:07:49.291 15930.289 - 16031.114: 91.5770% ( 58) 00:07:49.291 16031.114 - 16131.938: 92.1151% ( 52) 00:07:49.291 16131.938 - 16232.763: 92.4669% ( 34) 00:07:49.291 16232.763 - 16333.588: 92.9532% ( 47) 00:07:49.291 16333.588 - 16434.412: 93.4706% ( 50) 00:07:49.291 16434.412 - 16535.237: 93.9673% ( 48) 00:07:49.291 16535.237 - 16636.062: 94.3398% ( 36) 00:07:49.291 16636.062 - 16736.886: 94.7537% ( 40) 00:07:49.291 16736.886 - 16837.711: 94.9917% ( 23) 00:07:49.291 16837.711 - 16938.535: 95.1676% ( 17) 00:07:49.291 16938.535 - 17039.360: 95.3435% ( 17) 00:07:49.291 17039.360 - 17140.185: 95.5091% ( 16) 00:07:49.291 17140.185 - 17241.009: 95.6333% ( 12) 00:07:49.291 17241.009 - 17341.834: 95.7471% ( 11) 00:07:49.291 17341.834 - 17442.658: 95.9230% ( 17) 00:07:49.291 17442.658 - 17543.483: 96.0679% ( 14) 00:07:49.291 17543.483 - 17644.308: 96.2127% ( 14) 00:07:49.291 17644.308 - 17745.132: 96.3887% ( 17) 00:07:49.291 17745.132 - 17845.957: 96.5853% ( 19) 00:07:49.291 17845.957 - 17946.782: 96.7922% ( 20) 00:07:49.291 17946.782 - 18047.606: 97.0095% ( 21) 00:07:49.291 18047.606 - 18148.431: 97.2165% ( 20) 00:07:49.291 18148.431 - 18249.255: 97.3510% ( 13) 00:07:49.291 18249.255 - 18350.080: 97.5373% ( 18) 00:07:49.291 18350.080 - 18450.905: 97.6511% ( 11) 00:07:49.291 18450.905 - 18551.729: 97.7649% ( 11) 00:07:49.291 18551.729 - 18652.554: 97.8580% ( 9) 00:07:49.291 18652.554 - 18753.378: 97.9408% ( 8) 00:07:49.291 18753.378 - 18854.203: 98.0339% ( 9) 00:07:49.291 18854.203 - 18955.028: 98.1374% ( 10) 00:07:49.291 18955.028 - 19055.852: 98.2202% ( 8) 00:07:49.291 19055.852 - 19156.677: 98.2926% ( 7) 00:07:49.291 19156.677 - 19257.502: 98.3651% ( 7) 00:07:49.291 19257.502 - 19358.326: 98.4375% ( 7) 00:07:49.291 19358.326 - 19459.151: 98.4789% ( 4) 00:07:49.291 19459.151 - 19559.975: 98.5306% ( 5) 00:07:49.291 19559.975 - 19660.800: 98.5824% ( 5) 00:07:49.291 19660.800 - 19761.625: 98.6341% ( 5) 00:07:49.291 19761.625 - 19862.449: 98.6755% ( 4) 00:07:49.291 23693.785 - 23794.609: 98.6962% ( 2) 00:07:49.291 23794.609 - 23895.434: 98.7479% ( 5) 00:07:49.291 23895.434 - 23996.258: 98.7997% ( 5) 00:07:49.291 23996.258 - 24097.083: 98.8618% ( 6) 00:07:49.291 24097.083 - 24197.908: 98.9135% ( 5) 00:07:49.291 24197.908 - 24298.732: 98.9756% ( 6) 00:07:49.291 24298.732 - 24399.557: 99.0377% ( 6) 00:07:49.291 24399.557 - 24500.382: 99.0894% ( 5) 00:07:49.291 24500.382 - 24601.206: 99.1411% ( 5) 00:07:49.291 24601.206 - 24702.031: 99.2032% ( 6) 00:07:49.291 24702.031 - 24802.855: 99.2653% ( 6) 00:07:49.291 24802.855 - 24903.680: 99.3171% ( 5) 00:07:49.291 24903.680 - 25004.505: 99.3377% ( 2) 00:07:49.291 30045.735 - 30247.385: 99.3791% ( 4) 00:07:49.291 30247.385 - 30449.034: 99.4826% ( 10) 00:07:49.291 30449.034 - 30650.683: 99.5964% ( 11) 00:07:49.291 30650.683 - 30852.332: 99.7103% ( 11) 00:07:49.291 30852.332 - 31053.982: 99.8344% ( 12) 00:07:49.291 31053.982 - 31255.631: 99.9483% ( 11) 00:07:49.291 31255.631 - 31457.280: 100.0000% ( 5) 00:07:49.291 00:07:49.292 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:49.292 
============================================================================== 00:07:49.292 Range in us Cumulative IO count 00:07:49.292 5444.529 - 5469.735: 0.0206% ( 2) 00:07:49.292 5469.735 - 5494.942: 0.0720% ( 5) 00:07:49.292 5494.942 - 5520.148: 0.0822% ( 1) 00:07:49.292 5520.148 - 5545.354: 0.1131% ( 3) 00:07:49.292 5545.354 - 5570.560: 0.1234% ( 1) 00:07:49.292 5570.560 - 5595.766: 0.1850% ( 6) 00:07:49.292 5595.766 - 5620.972: 0.2570% ( 7) 00:07:49.292 5620.972 - 5646.178: 0.3084% ( 5) 00:07:49.292 5646.178 - 5671.385: 0.3392% ( 3) 00:07:49.292 5671.385 - 5696.591: 0.3701% ( 3) 00:07:49.292 5696.591 - 5721.797: 0.4112% ( 4) 00:07:49.292 5721.797 - 5747.003: 0.4317% ( 2) 00:07:49.292 5747.003 - 5772.209: 0.4523% ( 2) 00:07:49.292 5772.209 - 5797.415: 0.4729% ( 2) 00:07:49.292 5797.415 - 5822.622: 0.4934% ( 2) 00:07:49.292 5822.622 - 5847.828: 0.5037% ( 1) 00:07:49.292 5847.828 - 5873.034: 0.5140% ( 1) 00:07:49.292 5873.034 - 5898.240: 0.5345% ( 2) 00:07:49.292 5898.240 - 5923.446: 0.5551% ( 2) 00:07:49.292 5923.446 - 5948.652: 0.5757% ( 2) 00:07:49.292 5948.652 - 5973.858: 0.5859% ( 1) 00:07:49.292 5973.858 - 5999.065: 0.6065% ( 2) 00:07:49.292 5999.065 - 6024.271: 0.6271% ( 2) 00:07:49.292 6024.271 - 6049.477: 0.6373% ( 1) 00:07:49.292 6049.477 - 6074.683: 0.6579% ( 2) 00:07:49.292 9124.628 - 9175.040: 0.6785% ( 2) 00:07:49.292 9175.040 - 9225.452: 0.7196% ( 4) 00:07:49.292 9225.452 - 9275.865: 0.7504% ( 3) 00:07:49.292 9275.865 - 9326.277: 0.8018% ( 5) 00:07:49.292 9326.277 - 9376.689: 0.8738% ( 7) 00:07:49.292 9376.689 - 9427.102: 1.1205% ( 24) 00:07:49.292 9427.102 - 9477.514: 1.2027% ( 8) 00:07:49.292 9477.514 - 9527.926: 1.2850% ( 8) 00:07:49.292 9527.926 - 9578.338: 1.3569% ( 7) 00:07:49.292 9578.338 - 9628.751: 1.4289% ( 7) 00:07:49.292 9628.751 - 9679.163: 1.6653% ( 23) 00:07:49.292 9679.163 - 9729.575: 1.7373% ( 7) 00:07:49.292 9729.575 - 9779.988: 1.8092% ( 7) 00:07:49.292 9779.988 - 9830.400: 1.8606% ( 5) 00:07:49.292 9830.400 - 9880.812: 1.9017% ( 4) 00:07:49.292 9880.812 - 9931.225: 1.9326% ( 3) 00:07:49.292 9931.225 - 9981.637: 1.9942% ( 6) 00:07:49.292 9981.637 - 10032.049: 2.0868% ( 9) 00:07:49.292 10032.049 - 10082.462: 2.1793% ( 9) 00:07:49.292 10082.462 - 10132.874: 2.3026% ( 12) 00:07:49.292 10132.874 - 10183.286: 2.4979% ( 19) 00:07:49.292 10183.286 - 10233.698: 2.8166% ( 31) 00:07:49.292 10233.698 - 10284.111: 3.1353% ( 31) 00:07:49.292 10284.111 - 10334.523: 3.8651% ( 71) 00:07:49.292 10334.523 - 10384.935: 4.2146% ( 34) 00:07:49.292 10384.935 - 10435.348: 5.1295% ( 89) 00:07:49.292 10435.348 - 10485.760: 5.5407% ( 40) 00:07:49.292 10485.760 - 10536.172: 5.8902% ( 34) 00:07:49.292 10536.172 - 10586.585: 6.2192% ( 32) 00:07:49.292 10586.585 - 10636.997: 6.5070% ( 28) 00:07:49.292 10636.997 - 10687.409: 6.9387% ( 42) 00:07:49.292 10687.409 - 10737.822: 7.3088% ( 36) 00:07:49.292 10737.822 - 10788.234: 7.6275% ( 31) 00:07:49.292 10788.234 - 10838.646: 8.2134% ( 57) 00:07:49.292 10838.646 - 10889.058: 8.7685% ( 54) 00:07:49.292 10889.058 - 10939.471: 9.3236% ( 54) 00:07:49.292 10939.471 - 10989.883: 9.9507% ( 61) 00:07:49.292 10989.883 - 11040.295: 10.9169% ( 94) 00:07:49.292 11040.295 - 11090.708: 11.9757% ( 103) 00:07:49.292 11090.708 - 11141.120: 12.9729% ( 97) 00:07:49.292 11141.120 - 11191.532: 13.8466% ( 85) 00:07:49.292 11191.532 - 11241.945: 14.7615% ( 89) 00:07:49.292 11241.945 - 11292.357: 15.7586% ( 97) 00:07:49.292 11292.357 - 11342.769: 16.3960% ( 62) 00:07:49.292 11342.769 - 11393.182: 17.1464% ( 73) 00:07:49.292 11393.182 - 11443.594: 17.8865% ( 
72) 00:07:49.292 11443.594 - 11494.006: 18.7294% ( 82) 00:07:49.292 11494.006 - 11544.418: 19.5004% ( 75) 00:07:49.292 11544.418 - 11594.831: 20.3022% ( 78) 00:07:49.292 11594.831 - 11645.243: 21.0526% ( 73) 00:07:49.292 11645.243 - 11695.655: 22.1731% ( 109) 00:07:49.292 11695.655 - 11746.068: 23.1600% ( 96) 00:07:49.292 11746.068 - 11796.480: 24.1365% ( 95) 00:07:49.292 11796.480 - 11846.892: 25.0514% ( 89) 00:07:49.292 11846.892 - 11897.305: 26.2850% ( 120) 00:07:49.292 11897.305 - 11947.717: 27.5185% ( 120) 00:07:49.292 11947.717 - 11998.129: 28.6287% ( 108) 00:07:49.292 11998.129 - 12048.542: 29.4819% ( 83) 00:07:49.292 12048.542 - 12098.954: 30.5818% ( 107) 00:07:49.292 12098.954 - 12149.366: 31.4967% ( 89) 00:07:49.292 12149.366 - 12199.778: 32.3910% ( 87) 00:07:49.292 12199.778 - 12250.191: 33.2751% ( 86) 00:07:49.292 12250.191 - 12300.603: 34.4881% ( 118) 00:07:49.292 12300.603 - 12351.015: 35.6291% ( 111) 00:07:49.292 12351.015 - 12401.428: 36.8729% ( 121) 00:07:49.292 12401.428 - 12451.840: 38.3224% ( 141) 00:07:49.292 12451.840 - 12502.252: 39.7204% ( 136) 00:07:49.292 12502.252 - 12552.665: 40.9025% ( 115) 00:07:49.292 12552.665 - 12603.077: 42.1875% ( 125) 00:07:49.292 12603.077 - 12653.489: 43.5341% ( 131) 00:07:49.292 12653.489 - 12703.902: 44.9733% ( 140) 00:07:49.292 12703.902 - 12754.314: 46.6488% ( 163) 00:07:49.292 12754.314 - 12804.726: 47.7693% ( 109) 00:07:49.292 12804.726 - 12855.138: 49.0748% ( 127) 00:07:49.292 12855.138 - 12905.551: 50.2570% ( 115) 00:07:49.292 12905.551 - 13006.375: 52.4054% ( 209) 00:07:49.292 13006.375 - 13107.200: 54.7286% ( 226) 00:07:49.292 13107.200 - 13208.025: 57.0826% ( 229) 00:07:49.292 13208.025 - 13308.849: 59.0152% ( 188) 00:07:49.292 13308.849 - 13409.674: 60.9067% ( 184) 00:07:49.292 13409.674 - 13510.498: 63.2299% ( 226) 00:07:49.292 13510.498 - 13611.323: 65.7381% ( 244) 00:07:49.292 13611.323 - 13712.148: 67.7015% ( 191) 00:07:49.292 13712.148 - 13812.972: 69.3462% ( 160) 00:07:49.292 13812.972 - 13913.797: 70.8162% ( 143) 00:07:49.292 13913.797 - 14014.622: 72.5123% ( 165) 00:07:49.292 14014.622 - 14115.446: 74.3215% ( 176) 00:07:49.292 14115.446 - 14216.271: 75.9560% ( 159) 00:07:49.292 14216.271 - 14317.095: 77.2101% ( 122) 00:07:49.292 14317.095 - 14417.920: 78.2484% ( 101) 00:07:49.292 14417.920 - 14518.745: 79.5230% ( 124) 00:07:49.292 14518.745 - 14619.569: 80.7155% ( 116) 00:07:49.292 14619.569 - 14720.394: 81.8051% ( 106) 00:07:49.292 14720.394 - 14821.218: 82.9770% ( 114) 00:07:49.292 14821.218 - 14922.043: 83.8507% ( 85) 00:07:49.292 14922.043 - 15022.868: 84.7965% ( 92) 00:07:49.292 15022.868 - 15123.692: 85.6600% ( 84) 00:07:49.292 15123.692 - 15224.517: 86.4823% ( 80) 00:07:49.292 15224.517 - 15325.342: 87.2430% ( 74) 00:07:49.292 15325.342 - 15426.166: 87.9729% ( 71) 00:07:49.292 15426.166 - 15526.991: 88.5691% ( 58) 00:07:49.292 15526.991 - 15627.815: 89.2887% ( 70) 00:07:49.292 15627.815 - 15728.640: 89.9260% ( 62) 00:07:49.292 15728.640 - 15829.465: 90.5222% ( 58) 00:07:49.292 15829.465 - 15930.289: 91.1595% ( 62) 00:07:49.292 15930.289 - 16031.114: 91.8688% ( 69) 00:07:49.292 16031.114 - 16131.938: 92.4034% ( 52) 00:07:49.292 16131.938 - 16232.763: 92.8248% ( 41) 00:07:49.292 16232.763 - 16333.588: 93.1538% ( 32) 00:07:49.292 16333.588 - 16434.412: 93.4108% ( 25) 00:07:49.292 16434.412 - 16535.237: 93.6472% ( 23) 00:07:49.292 16535.237 - 16636.062: 94.0070% ( 35) 00:07:49.292 16636.062 - 16736.886: 94.4593% ( 44) 00:07:49.292 16736.886 - 16837.711: 94.7574% ( 29) 00:07:49.292 16837.711 - 16938.535: 95.0863% 
( 32) 00:07:49.292 16938.535 - 17039.360: 95.2919% ( 20) 00:07:49.292 17039.360 - 17140.185: 95.4975% ( 20) 00:07:49.292 17140.185 - 17241.009: 95.7031% ( 20) 00:07:49.292 17241.009 - 17341.834: 95.8779% ( 17) 00:07:49.292 17341.834 - 17442.658: 96.0424% ( 16) 00:07:49.292 17442.658 - 17543.483: 96.1451% ( 10) 00:07:49.292 17543.483 - 17644.308: 96.3816% ( 23) 00:07:49.292 17644.308 - 17745.132: 96.6283% ( 24) 00:07:49.292 17745.132 - 17845.957: 96.8853% ( 25) 00:07:49.292 17845.957 - 17946.782: 97.1423% ( 25) 00:07:49.292 17946.782 - 18047.606: 97.4095% ( 26) 00:07:49.292 18047.606 - 18148.431: 97.6665% ( 25) 00:07:49.292 18148.431 - 18249.255: 97.8721% ( 20) 00:07:49.292 18249.255 - 18350.080: 98.1291% ( 25) 00:07:49.292 18350.080 - 18450.905: 98.4478% ( 31) 00:07:49.292 18450.905 - 18551.729: 98.6637% ( 21) 00:07:49.292 18551.729 - 18652.554: 98.8795% ( 21) 00:07:49.292 18652.554 - 18753.378: 99.0132% ( 13) 00:07:49.292 18753.378 - 18854.203: 99.1160% ( 10) 00:07:49.292 18854.203 - 18955.028: 99.2085% ( 9) 00:07:49.292 18955.028 - 19055.852: 99.2496% ( 4) 00:07:49.292 19055.852 - 19156.677: 99.3010% ( 5) 00:07:49.292 19156.677 - 19257.502: 99.3421% ( 4) 00:07:49.292 24399.557 - 24500.382: 99.3627% ( 2) 00:07:49.292 24500.382 - 24601.206: 99.4141% ( 5) 00:07:49.292 24601.206 - 24702.031: 99.4757% ( 6) 00:07:49.292 24702.031 - 24802.855: 99.5169% ( 4) 00:07:49.292 24802.855 - 24903.680: 99.5785% ( 6) 00:07:49.292 24903.680 - 25004.505: 99.6299% ( 5) 00:07:49.292 25004.505 - 25105.329: 99.6916% ( 6) 00:07:49.292 25105.329 - 25206.154: 99.7430% ( 5) 00:07:49.292 25206.154 - 25306.978: 99.7944% ( 5) 00:07:49.292 25306.978 - 25407.803: 99.8561% ( 6) 00:07:49.292 25407.803 - 25508.628: 99.9075% ( 5) 00:07:49.292 25508.628 - 25609.452: 99.9692% ( 6) 00:07:49.292 25609.452 - 25710.277: 100.0000% ( 3) 00:07:49.292 00:07:49.552 18:21:09 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:49.552 00:07:49.552 real 0m2.487s 00:07:49.552 user 0m2.176s 00:07:49.552 sys 0m0.194s 00:07:49.552 18:21:09 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.552 ************************************ 00:07:49.552 18:21:09 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:49.552 END TEST nvme_perf 00:07:49.552 ************************************ 00:07:49.552 18:21:09 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:49.552 18:21:09 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:49.552 18:21:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.552 18:21:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:49.552 ************************************ 00:07:49.552 START TEST nvme_hello_world 00:07:49.552 ************************************ 00:07:49.552 18:21:09 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:49.552 Initializing NVMe Controllers 00:07:49.552 Attached to 0000:00:11.0 00:07:49.552 Namespace ID: 1 size: 5GB 00:07:49.552 Attached to 0000:00:13.0 00:07:49.552 Namespace ID: 1 size: 1GB 00:07:49.552 Attached to 0000:00:10.0 00:07:49.552 Namespace ID: 1 size: 6GB 00:07:49.552 Attached to 0000:00:12.0 00:07:49.552 Namespace ID: 1 size: 4GB 00:07:49.552 Namespace ID: 2 size: 4GB 00:07:49.552 Namespace ID: 3 size: 4GB 00:07:49.552 Initialization complete. 00:07:49.552 INFO: using host memory buffer for IO 00:07:49.552 Hello world! 
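The histograms above are cumulative, so a percentile lookup reduces to finding the first bucket whose cumulative percentage crosses the target. A minimal sketch, not part of this harness: it assumes the full per-bucket output has been captured to perf.log (a hypothetical file name) and relies on the "lower - upper: percent% ( count)" field layout shown above:

    # print the upper bound of the first bucket at or above the 99th percentile
    awk -v target=99 '
        $2 == "-" && $4 ~ /%$/ {                    # bucket lines: 18350.080 - 18450.905: 97.6925% ( 11)
            pct = $4; sub(/%/, "", pct)             # strip the % sign for numeric comparison
            if (pct + 0 >= target) {
                ub = $3; sub(/:/, "", ub)           # strip the trailing colon from the upper bound
                print ub " us"; exit
            }
        }' perf.log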
00:07:49.552 18:21:09 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:49.552 18:21:09 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:49.552 18:21:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:49.552 18:21:09 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:49.552 ************************************
00:07:49.552 START TEST nvme_hello_world
00:07:49.552 ************************************
00:07:49.552 18:21:09 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:49.552 Initializing NVMe Controllers
00:07:49.552 Attached to 0000:00:11.0
00:07:49.552   Namespace ID: 1 size: 5GB
00:07:49.552 Attached to 0000:00:13.0
00:07:49.552   Namespace ID: 1 size: 1GB
00:07:49.552 Attached to 0000:00:10.0
00:07:49.552   Namespace ID: 1 size: 6GB
00:07:49.552 Attached to 0000:00:12.0
00:07:49.552   Namespace ID: 1 size: 4GB
00:07:49.552   Namespace ID: 2 size: 4GB
00:07:49.552   Namespace ID: 3 size: 4GB
00:07:49.552 Initialization complete.
00:07:49.552 INFO: using host memory buffer for IO
00:07:49.552 Hello world!
00:07:49.552 INFO: using host memory buffer for IO
00:07:49.552 Hello world!
00:07:49.552 INFO: using host memory buffer for IO
00:07:49.552 Hello world!
00:07:49.552 INFO: using host memory buffer for IO
00:07:49.552 Hello world!
00:07:49.552 INFO: using host memory buffer for IO
00:07:49.552 Hello world!
00:07:49.552 INFO: using host memory buffer for IO
00:07:49.552 Hello world!
00:07:49.552
00:07:49.552 real    0m0.196s
00:07:49.552 user    0m0.083s
00:07:49.552 sys     0m0.075s
00:07:49.552 18:21:09 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.552 18:21:09 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:49.810 ************************************
00:07:49.810 END TEST nvme_hello_world
00:07:49.810 ************************************
00:07:49.810 18:21:09 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:49.810 18:21:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:49.810 18:21:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:49.810 18:21:09 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:49.810 ************************************
00:07:49.810 START TEST nvme_sgl
00:07:49.810 ************************************
00:07:49.810 18:21:09 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:49.810 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:49.810 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:49.810 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:50.068 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:50.068 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:50.068 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:50.068 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:50.068 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:50.068 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:50.068 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:50.068 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:50.068 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:50.068 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:50.068 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:50.068 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:50.068 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:50.068 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:50.068 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:50.068 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:50.068 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:50.068 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:50.068 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:50.068 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:50.068 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:50.068 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:50.068 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:50.068 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:50.068 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:50.068 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:50.068 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:50.068 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:50.068 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:50.068 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:50.068 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:50.068 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:50.068 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:50.068 NVMe Readv/Writev Request test
00:07:50.068 Attached to 0000:00:11.0
00:07:50.068 Attached to 0000:00:13.0
00:07:50.068 Attached to 0000:00:10.0
00:07:50.068 Attached to 0000:00:12.0
00:07:50.068 0000:00:11.0: build_io_request_2 test passed
00:07:50.068 0000:00:11.0: build_io_request_4 test passed
00:07:50.068 0000:00:11.0: build_io_request_5 test passed
00:07:50.068 0000:00:11.0: build_io_request_6 test passed
00:07:50.068 0000:00:11.0: build_io_request_7 test passed
00:07:50.068 0000:00:11.0: build_io_request_10 test passed
00:07:50.068 0000:00:10.0: build_io_request_2 test passed
00:07:50.068 0000:00:10.0: build_io_request_4 test passed
00:07:50.068 0000:00:10.0: build_io_request_5 test passed
00:07:50.068 0000:00:10.0: build_io_request_6 test passed
00:07:50.068 0000:00:10.0: build_io_request_7 test passed
00:07:50.068 0000:00:10.0: build_io_request_10 test passed
00:07:50.068 Cleaning up...
00:07:50.068
00:07:50.068 real    0m0.279s
00:07:50.068 user    0m0.135s
00:07:50.068 sys     0m0.098s
00:07:50.068 18:21:09 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:50.068 18:21:09 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:50.068 ************************************
00:07:50.068 END TEST nvme_sgl
00:07:50.068 ************************************
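Each controller is expected to reject exactly the malformed SGL layouts and pass the well-formed ones, so the section above can be sanity-checked mechanically. A hedged sketch, assuming this output has been captured to sgl.log (a hypothetical file name):

    # tally rejected vs. passed build_io_request cases per controller address
    awk '/Invalid IO length parameter/ { bad[$1]++ }
         / test passed/               { ok[$1]++ }
         END { for (c in bad) printf "%s %d rejected, %d passed\n", c, bad[c], ok[c] }' sgl.log

On the run above this should report 6 rejected / 6 passed for 0000:00:11.0 and 0000:00:10.0, and 12 rejected for 0000:00:13.0 and 0000:00:12.0.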
00:07:50.068 18:21:09 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:50.068 18:21:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:50.068 18:21:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.068 18:21:09 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:50.068 ************************************
00:07:50.068 START TEST nvme_e2edp
00:07:50.068 ************************************
00:07:50.068 18:21:09 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:50.325 NVMe Write/Read with End-to-End data protection test
00:07:50.325 Attached to 0000:00:11.0
00:07:50.325 Attached to 0000:00:13.0
00:07:50.325 Attached to 0000:00:10.0
00:07:50.325 Attached to 0000:00:12.0
00:07:50.325 Cleaning up...
00:07:50.325
00:07:50.325 real    0m0.190s
00:07:50.325 user    0m0.060s
00:07:50.325 sys     0m0.088s
00:07:50.325 18:21:10 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:50.325 18:21:10 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:50.325 ************************************
00:07:50.325 END TEST nvme_e2edp
00:07:50.325 ************************************
00:07:50.325 18:21:10 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:50.325 18:21:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:50.325 18:21:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.325 18:21:10 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:50.325 ************************************
00:07:50.325 START TEST nvme_reserve
00:07:50.325 ************************************
00:07:50.325 18:21:10 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:50.582 =====================================================
00:07:50.582 NVMe Controller at PCI bus 0, device 17, function 0
00:07:50.582 =====================================================
00:07:50.582 Reservations: Not Supported
00:07:50.582 =====================================================
00:07:50.582 NVMe Controller at PCI bus 0, device 19, function 0
00:07:50.582 =====================================================
00:07:50.582 Reservations: Not Supported
00:07:50.582 =====================================================
00:07:50.582 NVMe Controller at PCI bus 0, device 16, function 0
00:07:50.582 =====================================================
00:07:50.582 Reservations: Not Supported
00:07:50.582 =====================================================
00:07:50.582 NVMe Controller at PCI bus 0, device 18, function 0
00:07:50.582 =====================================================
00:07:50.582 Reservations: Not Supported
00:07:50.582 Reservation test passed
00:07:50.582
00:07:50.582 real    0m0.192s
00:07:50.582 user    0m0.068s
00:07:50.582 sys     0m0.079s
00:07:50.582 18:21:10 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:50.582 18:21:10 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:50.582 ************************************
00:07:50.582 END TEST nvme_reserve
00:07:50.582 ************************************
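"Reservations: Not Supported" is the expected result on these QEMU controllers: reservation support is advertised by bit 5 of the Identify Controller ONCS field. A hedged sketch for checking a real device with nvme-cli (assumed installed; /dev/nvme0 is a placeholder, not a device from this run):

    # ONCS bit 5 set => the controller supports reservations
    oncs=$(nvme id-ctrl /dev/nvme0 | awk -F: '/^oncs/ { gsub(/ /, "", $2); print $2 }')
    if (( oncs & 0x20 )); then
        echo "reservations supported"
    else
        echo "reservations not supported"
    fi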
00:07:50.582 18:21:10 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:50.582 18:21:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:50.582 18:21:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.582 18:21:10 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:50.582 ************************************
00:07:50.582 START TEST nvme_err_injection
00:07:50.582 ************************************
00:07:50.582 18:21:10 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:50.840 NVMe Error Injection test
00:07:50.840 Attached to 0000:00:11.0
00:07:50.840 Attached to 0000:00:13.0
00:07:50.840 Attached to 0000:00:10.0
00:07:50.840 Attached to 0000:00:12.0
00:07:50.840 0000:00:13.0: get features failed as expected
00:07:50.840 0000:00:10.0: get features failed as expected
00:07:50.840 0000:00:12.0: get features failed as expected
00:07:50.840 0000:00:11.0: get features failed as expected
00:07:50.840 0000:00:11.0: get features successfully as expected
00:07:50.840 0000:00:13.0: get features successfully as expected
00:07:50.840 0000:00:10.0: get features successfully as expected
00:07:50.840 0000:00:12.0: get features successfully as expected
00:07:50.840 0000:00:11.0: read failed as expected
00:07:50.840 0000:00:13.0: read failed as expected
00:07:50.840 0000:00:10.0: read failed as expected
00:07:50.840 0000:00:12.0: read failed as expected
00:07:50.840 0000:00:12.0: read successfully as expected
00:07:50.840 0000:00:11.0: read successfully as expected
00:07:50.840 0000:00:13.0: read successfully as expected
00:07:50.840 0000:00:10.0: read successfully as expected
00:07:50.840 Cleaning up...
00:07:50.840
00:07:50.840 real    0m0.203s
00:07:50.840 user    0m0.071s
00:07:50.840 sys     0m0.089s
00:07:50.840 18:21:10 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:50.840 18:21:10 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:07:50.840 ************************************
00:07:50.840 END TEST nvme_err_injection
00:07:50.840 ************************************
00:07:50.840 18:21:10 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:50.840 18:21:10 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:07:50.840 18:21:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.840 18:21:10 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:50.840 ************************************
00:07:50.840 START TEST nvme_overhead
00:07:50.840 ************************************
00:07:50.840 18:21:10 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:52.212 Initializing NVMe Controllers
00:07:52.212 Attached to 0000:00:11.0
00:07:52.212 Attached to 0000:00:13.0
00:07:52.212 Attached to 0000:00:10.0
00:07:52.212 Attached to 0000:00:12.0
00:07:52.212 Initialization complete. Launching workers.
00:07:52.212 submit (in ns)   avg, min, max = 11290.1, 9865.4, 62600.0
00:07:52.212 complete (in ns) avg, min, max = 7519.5, 7209.2, 34003.1
00:07:52.212
00:07:52.212 Submit histogram
00:07:52.212 ================
00:07:52.212        Range in us     Cumulative     Count
00:07:52.212 [per-bucket data omitted: buckets run from 9.846 us upward; the cumulative count reaches 100.0000% at 33.871 - 34.068... corrected: at 62.228 - 62.622 us]
00:07:52.212
00:07:52.213 Complete histogram
00:07:52.213 ==================
00:07:52.213        Range in us     Cumulative     Count
00:07:52.213 [per-bucket data omitted: buckets run from 7.188 us upward; the cumulative count reaches 100.0000% at 33.871 - 34.068 us]
00:07:52.213 ************************************
00:07:52.213 END TEST nvme_overhead
00:07:52.213 ************************************
00:07:52.213
00:07:52.213 real    0m1.212s
00:07:52.213 user    0m1.069s
00:07:52.213 sys     0m0.087s
00:07:52.213 18:21:11 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:52.213 18:21:11 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
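The submit/complete averages above bound what one core can sustain from software overhead alone: roughly, treating the two as serial CPU costs, each 4 KiB IO spends 11290.1 + 7519.5 ns of host time. A quick check of the implied ceiling (plain arithmetic, not part of the harness):

    # implied per-core ceiling from the measured submit + complete overhead (ns)
    awk 'BEGIN {
        submit = 11290.1; complete = 7519.5
        printf "overhead per IO: %.1f us\n", (submit + complete) / 1000   # ~18.8 us
        printf "implied max IOPS/core: %.0f\n", 1e9 / (submit + complete) # ~53000
    }'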
00:07:52.213 18:21:11 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:52.213 18:21:11 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:52.213 18:21:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:52.213 18:21:11 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:52.213 ************************************
00:07:52.213 START TEST nvme_arbitration
00:07:52.213 ************************************
00:07:52.213 18:21:11 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:55.486 Initializing NVMe Controllers
00:07:55.486 Attached to 0000:00:11.0
00:07:55.486 Attached to 0000:00:13.0
00:07:55.486 Attached to 0000:00:10.0
00:07:55.486 Attached to 0000:00:12.0
00:07:55.486 Associating QEMU NVMe Ctrl (12341 ) with lcore 0
00:07:55.486 Associating QEMU NVMe Ctrl (12343 ) with lcore 1
00:07:55.486 Associating QEMU NVMe Ctrl (12340 ) with lcore 2
00:07:55.486 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:07:55.486 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:07:55.486 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:07:55.486 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:07:55.486 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:07:55.486 Initialization complete. Launching workers.
00:07:55.486 Starting thread on core 1 with urgent priority queue
00:07:55.486 Starting thread on core 2 with urgent priority queue
00:07:55.486 Starting thread on core 3 with urgent priority queue
00:07:55.486 Starting thread on core 0 with urgent priority queue
00:07:55.486 QEMU NVMe Ctrl (12341 ) core 0: 6357.33 IO/s 15.73 secs/100000 ios
00:07:55.486 QEMU NVMe Ctrl (12342 ) core 0: 6357.33 IO/s 15.73 secs/100000 ios
00:07:55.486 QEMU NVMe Ctrl (12343 ) core 1: 6101.33 IO/s 16.39 secs/100000 ios
00:07:55.486 QEMU NVMe Ctrl (12342 ) core 1: 6101.33 IO/s 16.39 secs/100000 ios
00:07:55.486 QEMU NVMe Ctrl (12340 ) core 2: 5397.33 IO/s 18.53 secs/100000 ios
00:07:55.486 QEMU NVMe Ctrl (12342 ) core 3: 6058.67 IO/s 16.51 secs/100000 ios
00:07:55.486 ========================================================
00:07:55.486
00:07:55.486 ************************************
00:07:55.486 END TEST nvme_arbitration
00:07:55.486 ************************************
00:07:55.486
00:07:55.486 real    0m3.218s
00:07:55.486 user    0m9.013s
00:07:55.486 sys     0m0.110s
00:07:55.486 18:21:15 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:55.486 18:21:15 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
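The per-core result lines are self-consistent: the reported rate times the reported duration recovers the 100000-IO quota each thread was given. Checking the first line (plain arithmetic, not part of the harness):

    # 100000 ios at 6357.33 IO/s should take ~15.73 s, matching the log
    awk 'BEGIN { printf "%.2f s\n", 100000 / 6357.33 }'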
00:07:55.486 18:21:15 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:55.486 18:21:15 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:55.486 18:21:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:55.486 18:21:15 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:55.486 ************************************
00:07:55.486 START TEST nvme_single_aen
00:07:55.486 ************************************
00:07:55.486 18:21:15 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:55.486 Asynchronous Event Request test
00:07:55.486 Attached to 0000:00:11.0
00:07:55.486 Attached to 0000:00:13.0
00:07:55.486 Attached to 0000:00:10.0
00:07:55.486 Attached to 0000:00:12.0
00:07:55.486 Reset controller to setup AER completions for this process
00:07:55.486 Registering asynchronous event callbacks...
00:07:55.486 Getting orig temperature thresholds of all controllers
00:07:55.486 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:55.486 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:55.486 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:55.486 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:55.486 Setting all controllers temperature threshold low to trigger AER
00:07:55.486 Waiting for all controllers temperature threshold to be set lower
00:07:55.486 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:55.486 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:07:55.486 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:55.486 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:07:55.486 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:55.486 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:07:55.486 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:55.486 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:07:55.486 Waiting for all controllers to trigger AER and reset threshold
00:07:55.486 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:55.486 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:55.486 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:55.486 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:55.486 Cleaning up...
00:07:55.486
00:07:55.486 real    0m0.193s
00:07:55.486 user    0m0.060s
00:07:55.486 sys     0m0.096s
00:07:55.486 18:21:15 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:55.486 18:21:15 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:07:55.486 ************************************
00:07:55.486 END TEST nvme_single_aen
00:07:55.486 ************************************
00:07:55.486 18:21:15 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:07:55.486 18:21:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:55.486 18:21:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:55.486 18:21:15 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:55.486 ************************************
00:07:55.486 START TEST nvme_doorbell_aers
00:07:55.486 ************************************
00:07:55.486 18:21:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:07:55.486 18:21:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:07:55.486 18:21:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:07:55.486 18:21:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:07:55.486 18:21:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:07:55.486 18:21:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:07:55.486 18:21:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:07:55.486 18:21:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:55.486 18:21:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:07:55.486 18:21:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:55.486 18:21:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:07:55.486 18:21:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:07:55.486 18:21:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:55.486 18:21:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:07:55.745 [2024-11-29 18:21:15.570056] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75113) is not found. Dropping the request.
00:08:05.713 Executing: test_write_invalid_db
00:08:05.713 Waiting for AER completion...
00:08:05.713 Failure: test_write_invalid_db
00:08:05.713
00:08:05.713 Executing: test_invalid_db_write_overflow_sq
00:08:05.713 Waiting for AER completion...
00:08:05.713 Failure: test_invalid_db_write_overflow_sq
00:08:05.713
00:08:05.713 Executing: test_invalid_db_write_overflow_cq
00:08:05.713 Waiting for AER completion...
00:08:05.713 Failure: test_invalid_db_write_overflow_cq
00:08:05.713
00:08:05.971 18:21:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:05.971 18:21:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:08:05.971 [2024-11-29 18:21:25.621274] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75113) is not found. Dropping the request.
00:08:15.945 Executing: test_write_invalid_db
00:08:15.945 Waiting for AER completion...
00:08:15.945 Failure: test_write_invalid_db
00:08:15.945
00:08:15.945 Executing: test_invalid_db_write_overflow_sq
00:08:15.945 Waiting for AER completion...
00:08:15.945 Failure: test_invalid_db_write_overflow_sq
00:08:15.945
00:08:15.945 Executing: test_invalid_db_write_overflow_cq
00:08:15.945 Waiting for AER completion...
00:08:15.945 Failure: test_invalid_db_write_overflow_cq
00:08:15.945
00:08:15.945 18:21:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:15.945 18:21:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:08:15.945 [2024-11-29 18:21:35.647001] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75113) is not found. Dropping the request.
00:08:25.914 Executing: test_write_invalid_db
00:08:25.914 Waiting for AER completion...
00:08:25.914 Failure: test_write_invalid_db
00:08:25.914
00:08:25.914 Executing: test_invalid_db_write_overflow_sq
00:08:25.914 Waiting for AER completion...
00:08:25.914 Failure: test_invalid_db_write_overflow_sq
00:08:25.914
00:08:25.914 Executing: test_invalid_db_write_overflow_cq
00:08:25.914 Waiting for AER completion...
00:08:25.914 Failure: test_invalid_db_write_overflow_cq 00:08:25.914 00:08:25.914 18:21:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:25.914 18:21:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:25.914 [2024-11-29 18:21:45.684946] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75113) is not found. Dropping the request. 00:08:35.882 Executing: test_write_invalid_db 00:08:35.882 Waiting for AER completion... 00:08:35.882 Failure: test_write_invalid_db 00:08:35.882 00:08:35.882 Executing: test_invalid_db_write_overflow_sq 00:08:35.882 Waiting for AER completion... 00:08:35.882 Failure: test_invalid_db_write_overflow_sq 00:08:35.882 00:08:35.882 Executing: test_invalid_db_write_overflow_cq 00:08:35.882 Waiting for AER completion... 00:08:35.882 Failure: test_invalid_db_write_overflow_cq 00:08:35.882 00:08:35.882 00:08:35.882 real 0m40.203s 00:08:35.882 user 0m34.273s 00:08:35.882 sys 0m5.565s 00:08:35.882 18:21:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.882 ************************************ 00:08:35.882 END TEST nvme_doorbell_aers 00:08:35.882 ************************************ 00:08:35.882 18:21:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:35.882 18:21:55 nvme -- nvme/nvme.sh@97 -- # uname 00:08:35.882 18:21:55 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:35.882 18:21:55 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:35.882 18:21:55 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:35.882 18:21:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.882 18:21:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:35.882 ************************************ 00:08:35.882 START TEST nvme_multi_aen 00:08:35.882 ************************************ 00:08:35.882 18:21:55 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:35.882 [2024-11-29 18:21:55.726772] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75113) is not found. Dropping the request. 00:08:35.882 [2024-11-29 18:21:55.726825] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75113) is not found. Dropping the request. 00:08:35.882 [2024-11-29 18:21:55.726836] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75113) is not found. Dropping the request. 00:08:35.882 [2024-11-29 18:21:55.727851] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75113) is not found. Dropping the request. 00:08:35.882 [2024-11-29 18:21:55.727870] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75113) is not found. Dropping the request. 00:08:35.882 [2024-11-29 18:21:55.727876] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75113) is not found. Dropping the request. 00:08:35.882 [2024-11-29 18:21:55.728700] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75113) is not found. 
Dropping the request. 00:08:35.882 [2024-11-29 18:21:55.728722] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75113) is not found. Dropping the request. 00:08:35.882 [2024-11-29 18:21:55.728729] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75113) is not found. Dropping the request. 00:08:35.882 [2024-11-29 18:21:55.729700] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75113) is not found. Dropping the request. 00:08:35.882 [2024-11-29 18:21:55.729749] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75113) is not found. Dropping the request. 00:08:35.882 [2024-11-29 18:21:55.729811] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75113) is not found. Dropping the request. 00:08:35.882 Child process pid: 75639 00:08:36.141 [Child] Asynchronous Event Request test 00:08:36.141 [Child] Attached to 0000:00:11.0 00:08:36.141 [Child] Attached to 0000:00:13.0 00:08:36.141 [Child] Attached to 0000:00:10.0 00:08:36.141 [Child] Attached to 0000:00:12.0 00:08:36.141 [Child] Registering asynchronous event callbacks... 00:08:36.141 [Child] Getting orig temperature thresholds of all controllers 00:08:36.141 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.141 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.141 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.141 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.141 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:36.141 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.141 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.141 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.141 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.141 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.141 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.141 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.141 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.141 [Child] Cleaning up... 00:08:36.141 Asynchronous Event Request test 00:08:36.141 Attached to 0000:00:11.0 00:08:36.141 Attached to 0000:00:13.0 00:08:36.141 Attached to 0000:00:10.0 00:08:36.141 Attached to 0000:00:12.0 00:08:36.141 Reset controller to setup AER completions for this process 00:08:36.141 Registering asynchronous event callbacks... 
00:08:36.141 Getting orig temperature thresholds of all controllers 00:08:36.141 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.141 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.141 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.141 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.141 Setting all controllers temperature threshold low to trigger AER 00:08:36.141 Waiting for all controllers temperature threshold to be set lower 00:08:36.141 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.141 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:36.141 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.141 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:36.141 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.141 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:36.141 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.141 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:36.141 Waiting for all controllers to trigger AER and reset threshold 00:08:36.141 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.141 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.141 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.141 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.141 Cleaning up... 00:08:36.141 00:08:36.141 real 0m0.411s 00:08:36.141 user 0m0.135s 00:08:36.141 sys 0m0.163s 00:08:36.142 18:21:55 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.142 18:21:55 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:36.142 ************************************ 00:08:36.142 END TEST nvme_multi_aen 00:08:36.142 ************************************ 00:08:36.142 18:21:56 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:36.142 18:21:56 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:36.142 18:21:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.142 18:21:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:36.142 ************************************ 00:08:36.142 START TEST nvme_startup 00:08:36.142 ************************************ 00:08:36.142 18:21:56 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:36.400 Initializing NVMe Controllers 00:08:36.400 Attached to 0000:00:11.0 00:08:36.400 Attached to 0000:00:13.0 00:08:36.400 Attached to 0000:00:10.0 00:08:36.400 Attached to 0000:00:12.0 00:08:36.400 Initialization complete. 00:08:36.400 Time used:140074.312 (us). 
00:08:36.400 00:08:36.400 real 0m0.196s 00:08:36.400 user 0m0.068s 00:08:36.400 sys 0m0.080s 00:08:36.400 18:21:56 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.400 18:21:56 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:36.400 ************************************ 00:08:36.400 END TEST nvme_startup 00:08:36.400 ************************************ 00:08:36.400 18:21:56 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:36.400 18:21:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.400 18:21:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.400 18:21:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:36.400 ************************************ 00:08:36.400 START TEST nvme_multi_secondary 00:08:36.400 ************************************ 00:08:36.400 18:21:56 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:36.400 18:21:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75689 00:08:36.400 18:21:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:36.400 18:21:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75690 00:08:36.400 18:21:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:36.400 18:21:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:39.686 Initializing NVMe Controllers 00:08:39.686 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:39.686 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:39.686 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:39.687 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:39.687 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:39.687 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:39.687 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:39.687 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:39.687 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:39.687 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:39.687 Initialization complete. Launching workers. 
00:08:39.687 ======================================================== 00:08:39.687 Latency(us) 00:08:39.687 Device Information : IOPS MiB/s Average min max 00:08:39.687 PCIE (0000:00:11.0) NSID 1 from core 2: 3252.82 12.71 4918.44 863.20 12748.93 00:08:39.687 PCIE (0000:00:13.0) NSID 1 from core 2: 3252.82 12.71 4918.49 872.19 12205.01 00:08:39.687 PCIE (0000:00:10.0) NSID 1 from core 2: 3252.82 12.71 4917.49 846.88 12657.52 00:08:39.687 PCIE (0000:00:12.0) NSID 1 from core 2: 3252.82 12.71 4918.89 880.94 12360.72 00:08:39.687 PCIE (0000:00:12.0) NSID 2 from core 2: 3252.82 12.71 4919.48 880.79 12433.08 00:08:39.687 PCIE (0000:00:12.0) NSID 3 from core 2: 3252.82 12.71 4919.53 879.67 13541.80 00:08:39.687 ======================================================== 00:08:39.687 Total : 19516.92 76.24 4918.72 846.88 13541.80 00:08:39.687 00:08:39.687 18:21:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75689 00:08:39.687 Initializing NVMe Controllers 00:08:39.687 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:39.687 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:39.687 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:39.687 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:39.687 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:39.687 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:39.687 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:39.687 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:39.687 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:39.687 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:39.687 Initialization complete. Launching workers. 00:08:39.687 ======================================================== 00:08:39.687 Latency(us) 00:08:39.687 Device Information : IOPS MiB/s Average min max 00:08:39.687 PCIE (0000:00:11.0) NSID 1 from core 1: 7747.76 30.26 2064.65 1037.18 6001.67 00:08:39.687 PCIE (0000:00:13.0) NSID 1 from core 1: 7747.76 30.26 2064.63 922.02 5985.98 00:08:39.687 PCIE (0000:00:10.0) NSID 1 from core 1: 7747.76 30.26 2063.65 1073.97 5963.69 00:08:39.687 PCIE (0000:00:12.0) NSID 1 from core 1: 7747.76 30.26 2064.56 1065.58 6174.65 00:08:39.687 PCIE (0000:00:12.0) NSID 2 from core 1: 7747.76 30.26 2064.52 968.64 6031.80 00:08:39.687 PCIE (0000:00:12.0) NSID 3 from core 1: 7747.76 30.26 2064.50 887.95 5406.57 00:08:39.687 ======================================================== 00:08:39.687 Total : 46486.55 181.59 2064.42 887.95 6174.65 00:08:39.687 00:08:42.215 Initializing NVMe Controllers 00:08:42.215 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:42.215 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:42.215 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:42.215 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:42.215 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:42.215 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:42.215 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:42.215 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:42.215 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:42.215 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:42.215 Initialization complete. Launching workers. 
00:08:42.215 ======================================================== 00:08:42.215 Latency(us) 00:08:42.215 Device Information : IOPS MiB/s Average min max 00:08:42.215 PCIE (0000:00:11.0) NSID 1 from core 0: 11030.19 43.09 1450.19 699.18 5104.18 00:08:42.215 PCIE (0000:00:13.0) NSID 1 from core 0: 11030.19 43.09 1450.18 698.73 5411.64 00:08:42.215 PCIE (0000:00:10.0) NSID 1 from core 0: 11030.19 43.09 1449.32 678.52 5268.07 00:08:42.215 PCIE (0000:00:12.0) NSID 1 from core 0: 11030.19 43.09 1450.13 638.84 5622.65 00:08:42.215 PCIE (0000:00:12.0) NSID 2 from core 0: 11030.19 43.09 1450.11 465.16 5737.19 00:08:42.216 PCIE (0000:00:12.0) NSID 3 from core 0: 11030.19 43.09 1450.06 401.82 5493.51 00:08:42.216 ======================================================== 00:08:42.216 Total : 66181.14 258.52 1450.00 401.82 5737.19 00:08:42.216 00:08:42.216 18:22:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75690 00:08:42.216 18:22:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75759 00:08:42.216 18:22:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:42.216 18:22:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75760 00:08:42.216 18:22:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:42.216 18:22:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:45.497 Initializing NVMe Controllers 00:08:45.497 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:45.497 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:45.497 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:45.497 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:45.497 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:45.497 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:45.497 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:45.497 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:45.497 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:45.497 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:45.497 Initialization complete. Launching workers. 
00:08:45.497 ======================================================== 00:08:45.497 Latency(us) 00:08:45.497 Device Information : IOPS MiB/s Average min max 00:08:45.497 PCIE (0000:00:11.0) NSID 1 from core 0: 7795.29 30.45 2052.09 740.20 6490.54 00:08:45.497 PCIE (0000:00:13.0) NSID 1 from core 0: 7795.29 30.45 2052.07 732.22 5803.21 00:08:45.497 PCIE (0000:00:10.0) NSID 1 from core 0: 7795.29 30.45 2051.12 708.53 5640.93 00:08:45.497 PCIE (0000:00:12.0) NSID 1 from core 0: 7795.29 30.45 2052.04 730.17 6189.08 00:08:45.497 PCIE (0000:00:12.0) NSID 2 from core 0: 7795.29 30.45 2052.03 738.28 6289.18 00:08:45.497 PCIE (0000:00:12.0) NSID 3 from core 0: 7795.29 30.45 2052.10 736.34 6117.48 00:08:45.497 ======================================================== 00:08:45.497 Total : 46771.71 182.70 2051.91 708.53 6490.54 00:08:45.497 00:08:45.497 Initializing NVMe Controllers 00:08:45.497 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:45.497 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:45.497 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:45.498 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:45.498 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:45.498 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:45.498 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:45.498 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:45.498 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:45.498 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:45.498 Initialization complete. Launching workers. 00:08:45.498 ======================================================== 00:08:45.498 Latency(us) 00:08:45.498 Device Information : IOPS MiB/s Average min max 00:08:45.498 PCIE (0000:00:11.0) NSID 1 from core 1: 7560.76 29.53 2115.72 738.29 6404.58 00:08:45.498 PCIE (0000:00:13.0) NSID 1 from core 1: 7560.76 29.53 2115.68 746.90 6072.40 00:08:45.498 PCIE (0000:00:10.0) NSID 1 from core 1: 7560.76 29.53 2114.62 685.45 6234.71 00:08:45.498 PCIE (0000:00:12.0) NSID 1 from core 1: 7560.76 29.53 2115.52 630.43 6372.37 00:08:45.498 PCIE (0000:00:12.0) NSID 2 from core 1: 7560.76 29.53 2115.42 464.97 6393.35 00:08:45.498 PCIE (0000:00:12.0) NSID 3 from core 1: 7560.76 29.53 2115.35 397.68 6300.30 00:08:45.498 ======================================================== 00:08:45.498 Total : 45364.54 177.21 2115.38 397.68 6404.58 00:08:45.498 00:08:46.873 Initializing NVMe Controllers 00:08:46.873 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:46.873 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:46.873 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:46.873 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:46.873 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:46.873 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:46.873 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:46.873 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:46.873 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:46.873 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:46.873 Initialization complete. Launching workers. 
00:08:46.873 ======================================================== 00:08:46.873 Latency(us) 00:08:46.873 Device Information : IOPS MiB/s Average min max 00:08:46.873 PCIE (0000:00:11.0) NSID 1 from core 2: 4630.16 18.09 3454.94 731.71 14347.79 00:08:46.873 PCIE (0000:00:13.0) NSID 1 from core 2: 4630.16 18.09 3455.32 739.29 13934.19 00:08:46.873 PCIE (0000:00:10.0) NSID 1 from core 2: 4630.16 18.09 3453.50 727.20 13371.39 00:08:46.873 PCIE (0000:00:12.0) NSID 1 from core 2: 4630.16 18.09 3454.98 743.28 12918.29 00:08:46.873 PCIE (0000:00:12.0) NSID 2 from core 2: 4630.16 18.09 3454.99 728.05 12790.20 00:08:46.873 PCIE (0000:00:12.0) NSID 3 from core 2: 4630.16 18.09 3455.35 731.58 12903.61 00:08:46.873 ======================================================== 00:08:46.873 Total : 27780.94 108.52 3454.85 727.20 14347.79 00:08:46.873 00:08:47.133 ************************************ 00:08:47.133 END TEST nvme_multi_secondary 00:08:47.133 ************************************ 00:08:47.133 18:22:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75759 00:08:47.133 18:22:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75760 00:08:47.133 00:08:47.133 real 0m10.543s 00:08:47.133 user 0m18.334s 00:08:47.133 sys 0m0.575s 00:08:47.133 18:22:06 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.133 18:22:06 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:47.133 18:22:06 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:47.133 18:22:06 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:47.133 18:22:06 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/74721 ]] 00:08:47.133 18:22:06 nvme -- common/autotest_common.sh@1094 -- # kill 74721 00:08:47.133 18:22:06 nvme -- common/autotest_common.sh@1095 -- # wait 74721 00:08:47.133 [2024-11-29 18:22:06.819900] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75635) is not found. Dropping the request. 00:08:47.133 [2024-11-29 18:22:06.819975] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75635) is not found. Dropping the request. 00:08:47.133 [2024-11-29 18:22:06.819992] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75635) is not found. Dropping the request. 00:08:47.133 [2024-11-29 18:22:06.820011] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75635) is not found. Dropping the request. 00:08:47.133 [2024-11-29 18:22:06.820666] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75635) is not found. Dropping the request. 00:08:47.133 [2024-11-29 18:22:06.820745] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75635) is not found. Dropping the request. 00:08:47.133 [2024-11-29 18:22:06.820771] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75635) is not found. Dropping the request. 00:08:47.133 [2024-11-29 18:22:06.820799] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75635) is not found. Dropping the request. 00:08:47.133 [2024-11-29 18:22:06.821572] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75635) is not found. Dropping the request. 
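Aside on the nvme_multi_secondary run above: it exercises SPDK multi-process support by launching several spdk_nvme_perf instances that attach to the same controllers through a shared memory id (-i 0), each pinned to its own core mask. A condensed sketch of that launch pattern, with flags taken from the nvme.sh trace (exact pairing of pids to flag sets is assumed):

  #!/usr/bin/env bash
  # Sketch: three perf processes share the controllers via shm id 0 and run
  # on disjoint cores; the script then waits for each run to finish, as the
  # wait calls in the trace show.
  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid2=$!
  wait "$pid0" "$pid1" "$pid2"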
00:08:47.133 [2024-11-29 18:22:06.821644] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75635) is not found. Dropping the request. 00:08:47.133 [2024-11-29 18:22:06.821672] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75635) is not found. Dropping the request. 00:08:47.133 [2024-11-29 18:22:06.821707] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75635) is not found. Dropping the request. 00:08:47.133 [2024-11-29 18:22:06.822440] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75635) is not found. Dropping the request. 00:08:47.133 [2024-11-29 18:22:06.822542] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75635) is not found. Dropping the request. 00:08:47.133 [2024-11-29 18:22:06.822569] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75635) is not found. Dropping the request. 00:08:47.133 [2024-11-29 18:22:06.822596] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75635) is not found. Dropping the request. 00:08:47.133 18:22:06 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:47.133 18:22:06 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:47.133 18:22:06 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:47.133 18:22:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.133 18:22:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.133 18:22:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:47.133 ************************************ 00:08:47.133 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:47.133 ************************************ 00:08:47.133 18:22:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:47.133 * Looking for test storage... 
00:08:47.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:47.133 18:22:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:47.133 18:22:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:47.133 18:22:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:47.133 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:47.133 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.133 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.133 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.133 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.133 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.133 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.133 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.133 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.133 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.133 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.133 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.133 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:47.134 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:47.134 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.134 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.134 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:47.134 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:47.134 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.134 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:47.134 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.134 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:47.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.394 --rc genhtml_branch_coverage=1 00:08:47.394 --rc genhtml_function_coverage=1 00:08:47.394 --rc genhtml_legend=1 00:08:47.394 --rc geninfo_all_blocks=1 00:08:47.394 --rc geninfo_unexecuted_blocks=1 00:08:47.394 00:08:47.394 ' 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:47.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.394 --rc genhtml_branch_coverage=1 00:08:47.394 --rc genhtml_function_coverage=1 00:08:47.394 --rc genhtml_legend=1 00:08:47.394 --rc geninfo_all_blocks=1 00:08:47.394 --rc geninfo_unexecuted_blocks=1 00:08:47.394 00:08:47.394 ' 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:47.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.394 --rc genhtml_branch_coverage=1 00:08:47.394 --rc genhtml_function_coverage=1 00:08:47.394 --rc genhtml_legend=1 00:08:47.394 --rc geninfo_all_blocks=1 00:08:47.394 --rc geninfo_unexecuted_blocks=1 00:08:47.394 00:08:47.394 ' 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:47.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.394 --rc genhtml_branch_coverage=1 00:08:47.394 --rc genhtml_function_coverage=1 00:08:47.394 --rc genhtml_legend=1 00:08:47.394 --rc geninfo_all_blocks=1 00:08:47.394 --rc geninfo_unexecuted_blocks=1 00:08:47.394 00:08:47.394 ' 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:47.394 
18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:47.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75921 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75921 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 75921 ']' 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
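Aside on the bdev_nvme_reset_stuck_adm_cmd bring-up traced above and below: the test starts spdk_tgt on four cores, waits for its RPC socket, attaches the first controller by BDF, and arms a one-shot error injection on an admin opcode. A minimal sketch of that sequence, using only the RPCs visible in the trace (the socket poll is a crude stand-in for waitforlisten):

  #!/usr/bin/env bash
  # Sketch, assumed from the trace: launch the SPDK target and attach nvme0.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" -m 0xF & spdk_target_pid=$!
  # Poll until /var/tmp/spdk.sock answers RPCs; autotest_common.sh's
  # waitforlisten does this more carefully.
  until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  "$SPDK/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  # Arm a one-shot stuck admin command: opcode 10 (0x0a, Get Features) is held
  # rather than submitted (--do_not_submit) for up to 15 s, then completed with
  # sct=0/sc=1; a controller reset must complete it manually, which is what
  # the test measures. Flags copied from the trace.
  "$SPDK/scripts/rpc.py" bdev_nvme_add_error_injection -n nvme0 \
      --cmd-type admin --opc 10 --timeout-in-us 15000000 \
      --err-count 1 --sct 0 --sc 1 --do_not_submit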
00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.394 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:47.394 [2024-11-29 18:22:07.173098] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:08:47.395 [2024-11-29 18:22:07.173368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75921 ] 00:08:47.654 [2024-11-29 18:22:07.341725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.654 [2024-11-29 18:22:07.363038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.654 [2024-11-29 18:22:07.363319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.654 [2024-11-29 18:22:07.363426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.654 [2024-11-29 18:22:07.363553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.221 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.221 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:48.221 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:48.221 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.221 18:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:48.221 nvme0n1 00:08:48.221 18:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.221 18:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:48.221 18:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_CKvqY.txt 00:08:48.221 18:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:48.221 18:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.221 18:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:48.221 true 00:08:48.221 18:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.221 18:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:48.221 18:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732904528 00:08:48.221 18:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75944 00:08:48.221 18:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:48.221 18:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:48.221 18:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:50.748 [2024-11-29 18:22:10.094961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:50.748 [2024-11-29 18:22:10.095278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:50.748 [2024-11-29 18:22:10.095303] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:50.748 [2024-11-29 18:22:10.095319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.748 [2024-11-29 18:22:10.097220] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:50.748 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75944 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75944 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75944 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_CKvqY.txt 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_CKvqY.txt 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75921 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 75921 ']' 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 75921 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75921 00:08:50.748 killing process with pid 75921 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75921' 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 75921 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 75921 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:50.748 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:50.748 00:08:50.749 real 0m3.633s 00:08:50.749 user 0m12.937s 00:08:50.749 sys 0m0.478s 00:08:50.749 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:50.749 ************************************ 00:08:50.749 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:50.749 ************************************ 00:08:50.749 18:22:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:50.749 18:22:10 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:50.749 18:22:10 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:50.749 18:22:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.749 18:22:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.749 18:22:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.749 ************************************ 00:08:50.749 START TEST nvme_fio 00:08:50.749 ************************************ 00:08:50.749 18:22:10 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:50.749 18:22:10 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:50.749 18:22:10 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:50.749 18:22:10 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:50.749 18:22:10 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:50.749 18:22:10 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:50.749 18:22:10 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:50.749 18:22:10 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:50.749 18:22:10 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:50.749 18:22:10 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:50.749 18:22:10 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:50.749 18:22:10 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:50.749 18:22:10 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:50.749 18:22:10 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:50.749 18:22:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:50.749 18:22:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:51.009 18:22:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:51.009 18:22:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:51.269 18:22:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:51.269 18:22:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:51.269 18:22:11 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:51.269 18:22:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:51.269 18:22:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:51.269 18:22:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:51.269 18:22:11 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:51.269 18:22:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:51.269 18:22:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:51.269 18:22:11 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:51.269 18:22:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:51.269 18:22:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:51.269 18:22:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:51.269 18:22:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:51.269 18:22:11 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:51.269 18:22:11 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:51.269 18:22:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:51.269 18:22:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:51.528 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:51.528 fio-3.35 00:08:51.528 Starting 1 thread 00:08:58.162 00:08:58.162 test: (groupid=0, jobs=1): err= 0: pid=76067: Fri Nov 29 18:22:17 2024 00:08:58.162 read: IOPS=24.6k, BW=96.1MiB/s (101MB/s)(192MiB/2001msec) 00:08:58.162 slat (nsec): min=3394, max=66980, avg=4859.38, stdev=1938.64 00:08:58.162 clat (usec): min=258, max=12722, avg=2598.85, stdev=755.34 00:08:58.162 lat (usec): min=262, max=12776, avg=2603.71, stdev=756.58 00:08:58.162 clat percentiles (usec): 00:08:58.162 | 1.00th=[ 1680], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2343], 00:08:58.162 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2409], 60.00th=[ 2442], 00:08:58.162 | 70.00th=[ 2474], 80.00th=[ 2573], 90.00th=[ 2900], 95.00th=[ 4047], 00:08:58.162 | 99.00th=[ 6194], 99.50th=[ 6456], 99.90th=[ 7570], 99.95th=[ 9241], 00:08:58.162 | 99.99th=[12387] 00:08:58.162 bw ( KiB/s): min=93552, max=103192, per=100.00%, avg=98424.00, stdev=4820.84, samples=3 00:08:58.162 iops : min=23388, max=25798, avg=24606.00, stdev=1205.21, samples=3 00:08:58.162 write: IOPS=24.4k, BW=95.4MiB/s (100MB/s)(191MiB/2001msec); 0 zone resets 00:08:58.162 slat (nsec): min=3449, max=71896, avg=5130.73, stdev=1951.92 00:08:58.162 clat (usec): min=225, max=12514, avg=2604.05, stdev=757.99 00:08:58.162 lat (usec): min=230, max=12529, avg=2609.18, stdev=759.23 00:08:58.162 clat percentiles (usec): 00:08:58.162 | 1.00th=[ 1696], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2343], 00:08:58.162 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2409], 60.00th=[ 2442], 00:08:58.162 | 70.00th=[ 2474], 80.00th=[ 2573], 90.00th=[ 2900], 95.00th=[ 4015], 00:08:58.162 | 99.00th=[ 6194], 99.50th=[ 6456], 99.90th=[ 7635], 99.95th=[ 9634], 00:08:58.162 | 99.99th=[12125] 00:08:58.162 bw ( KiB/s): min=93312, max=102896, per=100.00%, avg=98466.67, stdev=4833.00, samples=3 00:08:58.162 iops : min=23328, max=25724, avg=24616.67, stdev=1208.25, samples=3 00:08:58.162 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.05% 00:08:58.162 lat (msec) : 2=2.46%, 4=92.39%, 10=5.03%, 20=0.04% 00:08:58.162 cpu : usr=99.35%, sys=0.00%, 
ctx=4, majf=0, minf=625 00:08:58.162 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:58.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:58.162 issued rwts: total=49213,48894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.162 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:58.162 00:08:58.162 Run status group 0 (all jobs): 00:08:58.162 READ: bw=96.1MiB/s (101MB/s), 96.1MiB/s-96.1MiB/s (101MB/s-101MB/s), io=192MiB (202MB), run=2001-2001msec 00:08:58.162 WRITE: bw=95.4MiB/s (100MB/s), 95.4MiB/s-95.4MiB/s (100MB/s-100MB/s), io=191MiB (200MB), run=2001-2001msec 00:08:58.162 ----------------------------------------------------- 00:08:58.162 Suppressions used: 00:08:58.162 count bytes template 00:08:58.163 1 32 /usr/src/fio/parse.c 00:08:58.163 1 8 libtcmalloc_minimal.so 00:08:58.163 ----------------------------------------------------- 00:08:58.163 00:08:58.163 18:22:17 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:58.163 18:22:17 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:58.163 18:22:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:58.163 18:22:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:58.163 18:22:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:58.163 18:22:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:58.424 18:22:18 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:58.424 18:22:18 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:58.424 18:22:18 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:58.424 18:22:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:58.424 18:22:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:58.424 18:22:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:58.424 18:22:18 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:58.424 18:22:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:58.424 18:22:18 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:58.424 18:22:18 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:58.424 18:22:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:58.424 18:22:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:58.424 18:22:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:58.424 18:22:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:58.424 18:22:18 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:58.424 18:22:18 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:58.424 18:22:18 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:58.424 18:22:18 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:58.684 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:58.684 fio-3.35 00:08:58.684 Starting 1 thread 00:09:05.266 00:09:05.266 test: (groupid=0, jobs=1): err= 0: pid=76123: Fri Nov 29 18:22:24 2024 00:09:05.266 read: IOPS=21.0k, BW=82.0MiB/s (86.0MB/s)(164MiB/2001msec) 00:09:05.266 slat (nsec): min=3907, max=55283, avg=6010.82, stdev=2699.07 00:09:05.266 clat (usec): min=232, max=11791, avg=3047.05, stdev=1003.16 00:09:05.266 lat (usec): min=237, max=11843, avg=3053.06, stdev=1004.80 00:09:05.266 clat percentiles (usec): 00:09:05.266 | 1.00th=[ 2311], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2573], 00:09:05.266 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2737], 00:09:05.266 | 70.00th=[ 2802], 80.00th=[ 2999], 90.00th=[ 4424], 95.00th=[ 5932], 00:09:05.266 | 99.00th=[ 6521], 99.50th=[ 7046], 99.90th=[ 7963], 99.95th=[ 8848], 00:09:05.266 | 99.99th=[11469] 00:09:05.266 bw ( KiB/s): min=79016, max=90304, per=99.96%, avg=83904.00, stdev=5793.91, samples=3 00:09:05.266 iops : min=19754, max=22576, avg=20976.00, stdev=1448.48, samples=3 00:09:05.266 write: IOPS=20.9k, BW=81.5MiB/s (85.5MB/s)(163MiB/2001msec); 0 zone resets 00:09:05.266 slat (nsec): min=4174, max=80686, avg=6476.96, stdev=2796.23 00:09:05.266 clat (usec): min=215, max=11566, avg=3047.64, stdev=1000.37 00:09:05.266 lat (usec): min=221, max=11578, avg=3054.11, stdev=1002.06 00:09:05.266 clat percentiles (usec): 00:09:05.266 | 1.00th=[ 2343], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2573], 00:09:05.266 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2737], 00:09:05.266 | 70.00th=[ 2802], 80.00th=[ 2966], 90.00th=[ 4424], 95.00th=[ 5932], 00:09:05.266 | 99.00th=[ 6521], 99.50th=[ 6980], 99.90th=[ 8029], 99.95th=[ 9110], 00:09:05.266 | 99.99th=[11076] 00:09:05.266 bw ( KiB/s): min=78848, max=90416, per=100.00%, avg=84005.33, stdev=5884.96, samples=3 00:09:05.266 iops : min=19712, max=22604, avg=21001.33, stdev=1471.24, samples=3 00:09:05.266 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:09:05.266 lat (msec) : 2=0.37%, 4=88.21%, 10=11.34%, 20=0.03% 00:09:05.266 cpu : usr=99.10%, sys=0.10%, ctx=5, majf=0, minf=625 00:09:05.266 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:05.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:05.266 issued rwts: total=41990,41764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.266 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:05.266 00:09:05.266 Run status group 0 (all jobs): 00:09:05.266 READ: bw=82.0MiB/s (86.0MB/s), 82.0MiB/s-82.0MiB/s (86.0MB/s-86.0MB/s), io=164MiB (172MB), run=2001-2001msec 00:09:05.266 WRITE: bw=81.5MiB/s (85.5MB/s), 81.5MiB/s-81.5MiB/s (85.5MB/s-85.5MB/s), io=163MiB (171MB), run=2001-2001msec 00:09:05.266 ----------------------------------------------------- 00:09:05.266 Suppressions used: 00:09:05.266 count bytes template 00:09:05.266 1 32 /usr/src/fio/parse.c 00:09:05.266 1 8 libtcmalloc_minimal.so 00:09:05.266 ----------------------------------------------------- 
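Aside on the fio_nvme invocations traced in this test: fio drives each controller directly through SPDK's external ioengine, which is injected via LD_PRELOAD together with the ASan runtime that this build links into the plugin. A sketch of the invocation pattern, with paths taken from the trace (note that fio's --filename escapes ':' as '.' in the PCIe address):

  #!/usr/bin/env bash
  # Sketch of one fio run against a single controller, as seen in the trace.
  SPDK=/home/vagrant/spdk_repo/spdk
  LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK/build/fio/spdk_nvme" \
    /usr/src/fio/fio "$SPDK/app/fio/nvme/example_config.fio" \
    '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096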
00:09:05.266 00:09:05.266 18:22:24 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:05.266 18:22:24 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:05.266 18:22:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:05.266 18:22:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:05.266 18:22:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:05.266 18:22:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:05.266 18:22:24 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:05.266 18:22:24 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:05.266 18:22:24 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:05.266 18:22:24 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:05.266 18:22:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:05.266 18:22:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:05.266 18:22:24 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:05.266 18:22:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:05.266 18:22:24 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:05.266 18:22:24 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:05.266 18:22:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:05.266 18:22:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:05.266 18:22:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:05.266 18:22:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:05.266 18:22:24 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:05.266 18:22:24 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:05.266 18:22:24 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:05.266 18:22:24 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:05.266 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:05.266 fio-3.35 00:09:05.266 Starting 1 thread 00:09:11.855 00:09:11.855 test: (groupid=0, jobs=1): err= 0: pid=76184: Fri Nov 29 18:22:31 2024 00:09:11.855 read: IOPS=22.3k, BW=87.2MiB/s (91.4MB/s)(174MiB/2001msec) 00:09:11.855 slat (nsec): min=4181, max=59320, avg=5146.74, stdev=2147.34 00:09:11.855 clat (usec): min=518, max=9443, avg=2868.09, stdev=827.30 00:09:11.855 lat (usec): min=523, max=9490, avg=2873.24, stdev=828.63 00:09:11.855 clat percentiles (usec): 00:09:11.855 | 1.00th=[ 2147], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 
00:09:11.855 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2606], 60.00th=[ 2671], 00:09:11.855 | 70.00th=[ 2802], 80.00th=[ 3130], 90.00th=[ 3621], 95.00th=[ 4621], 00:09:11.855 | 99.00th=[ 6587], 99.50th=[ 6849], 99.90th=[ 7635], 99.95th=[ 8029], 00:09:11.855 | 99.99th=[ 9372] 00:09:11.855 bw ( KiB/s): min=87048, max=98280, per=100.00%, avg=90882.67, stdev=6407.72, samples=3 00:09:11.855 iops : min=21762, max=24570, avg=22720.67, stdev=1601.93, samples=3 00:09:11.855 write: IOPS=22.2k, BW=86.6MiB/s (90.8MB/s)(173MiB/2001msec); 0 zone resets 00:09:11.855 slat (usec): min=4, max=122, avg= 5.40, stdev= 2.26 00:09:11.855 clat (usec): min=286, max=9370, avg=2867.04, stdev=821.74 00:09:11.855 lat (usec): min=292, max=9385, avg=2872.44, stdev=823.05 00:09:11.855 clat percentiles (usec): 00:09:11.855 | 1.00th=[ 2147], 5.00th=[ 2311], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:11.855 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2606], 60.00th=[ 2671], 00:09:11.855 | 70.00th=[ 2802], 80.00th=[ 3130], 90.00th=[ 3589], 95.00th=[ 4555], 00:09:11.855 | 99.00th=[ 6587], 99.50th=[ 6849], 99.90th=[ 7635], 99.95th=[ 8291], 00:09:11.855 | 99.99th=[ 9241] 00:09:11.855 bw ( KiB/s): min=86976, max=97408, per=100.00%, avg=91061.33, stdev=5571.52, samples=3 00:09:11.855 iops : min=21744, max=24352, avg=22765.33, stdev=1392.88, samples=3 00:09:11.855 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:11.855 lat (msec) : 2=0.58%, 4=91.76%, 10=7.63% 00:09:11.855 cpu : usr=99.05%, sys=0.20%, ctx=4, majf=0, minf=626 00:09:11.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:11.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.855 issued rwts: total=44646,44358,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.855 00:09:11.855 Run status group 0 (all jobs): 00:09:11.855 READ: bw=87.2MiB/s (91.4MB/s), 87.2MiB/s-87.2MiB/s (91.4MB/s-91.4MB/s), io=174MiB (183MB), run=2001-2001msec 00:09:11.855 WRITE: bw=86.6MiB/s (90.8MB/s), 86.6MiB/s-86.6MiB/s (90.8MB/s-90.8MB/s), io=173MiB (182MB), run=2001-2001msec 00:09:11.855 ----------------------------------------------------- 00:09:11.855 Suppressions used: 00:09:11.856 count bytes template 00:09:11.856 1 32 /usr/src/fio/parse.c 00:09:11.856 1 8 libtcmalloc_minimal.so 00:09:11.856 ----------------------------------------------------- 00:09:11.856 00:09:11.856 18:22:31 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:11.856 18:22:31 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:11.856 18:22:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:11.856 18:22:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:12.117 18:22:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:12.117 18:22:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:12.376 18:22:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:12.376 18:22:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:12.376 18:22:32 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:12.376 18:22:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:12.376 18:22:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:12.376 18:22:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:12.376 18:22:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:12.376 18:22:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:12.376 18:22:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:12.376 18:22:32 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:12.376 18:22:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:12.376 18:22:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:12.376 18:22:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:12.376 18:22:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:12.376 18:22:32 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:12.376 18:22:32 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:12.376 18:22:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:12.376 18:22:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:12.376 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:12.376 fio-3.35 00:09:12.376 Starting 1 thread 00:09:18.950 00:09:18.950 test: (groupid=0, jobs=1): err= 0: pid=76245: Fri Nov 29 18:22:37 2024 00:09:18.950 read: IOPS=21.5k, BW=84.1MiB/s (88.2MB/s)(168MiB/2001msec) 00:09:18.950 slat (nsec): min=3347, max=71280, avg=5096.25, stdev=2415.26 00:09:18.950 clat (usec): min=205, max=13194, avg=2965.93, stdev=1062.65 00:09:18.950 lat (usec): min=210, max=13265, avg=2971.03, stdev=1063.81 00:09:18.950 clat percentiles (usec): 00:09:18.950 | 1.00th=[ 1762], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2376], 00:09:18.950 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2606], 60.00th=[ 2704], 00:09:18.950 | 70.00th=[ 2900], 80.00th=[ 3294], 90.00th=[ 4424], 95.00th=[ 5276], 00:09:18.950 | 99.00th=[ 7046], 99.50th=[ 8160], 99.90th=[ 8979], 99.95th=[ 9634], 00:09:18.950 | 99.99th=[12780] 00:09:18.950 bw ( KiB/s): min=76760, max=89464, per=97.64%, avg=84069.33, stdev=6564.86, samples=3 00:09:18.951 iops : min=19190, max=22366, avg=21017.33, stdev=1641.21, samples=3 00:09:18.951 write: IOPS=21.4k, BW=83.5MiB/s (87.5MB/s)(167MiB/2001msec); 0 zone resets 00:09:18.951 slat (nsec): min=3471, max=48220, avg=5307.72, stdev=2323.87 00:09:18.951 clat (usec): min=237, max=12931, avg=2982.75, stdev=1070.32 00:09:18.951 lat (usec): min=241, max=12944, avg=2988.06, stdev=1071.47 00:09:18.951 clat percentiles (usec): 00:09:18.951 | 1.00th=[ 1795], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2376], 00:09:18.951 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2638], 60.00th=[ 2737], 00:09:18.951 | 70.00th=[ 2900], 80.00th=[ 3294], 90.00th=[ 4424], 95.00th=[ 5342], 
00:09:18.951 | 99.00th=[ 7111], 99.50th=[ 8225], 99.90th=[ 8979], 99.95th=[ 9896], 00:09:18.951 | 99.99th=[12387] 00:09:18.951 bw ( KiB/s): min=76624, max=89448, per=98.46%, avg=84146.67, stdev=6694.36, samples=3 00:09:18.951 iops : min=19156, max=22362, avg=21036.67, stdev=1673.59, samples=3 00:09:18.951 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:09:18.951 lat (msec) : 2=2.68%, 4=84.20%, 10=13.02%, 20=0.05% 00:09:18.951 cpu : usr=99.10%, sys=0.05%, ctx=2, majf=0, minf=625 00:09:18.951 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:18.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.951 issued rwts: total=43072,42751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.951 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.951 00:09:18.951 Run status group 0 (all jobs): 00:09:18.951 READ: bw=84.1MiB/s (88.2MB/s), 84.1MiB/s-84.1MiB/s (88.2MB/s-88.2MB/s), io=168MiB (176MB), run=2001-2001msec 00:09:18.951 WRITE: bw=83.5MiB/s (87.5MB/s), 83.5MiB/s-83.5MiB/s (87.5MB/s-87.5MB/s), io=167MiB (175MB), run=2001-2001msec 00:09:18.951 ----------------------------------------------------- 00:09:18.951 Suppressions used: 00:09:18.951 count bytes template 00:09:18.951 1 32 /usr/src/fio/parse.c 00:09:18.951 1 8 libtcmalloc_minimal.so 00:09:18.951 ----------------------------------------------------- 00:09:18.951 00:09:18.951 18:22:37 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:18.951 18:22:37 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:18.951 00:09:18.951 real 0m27.188s 00:09:18.951 user 0m18.063s 00:09:18.951 sys 0m15.891s 00:09:18.951 18:22:37 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.951 18:22:37 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:18.951 ************************************ 00:09:18.951 END TEST nvme_fio 00:09:18.951 ************************************ 00:09:18.951 ************************************ 00:09:18.951 END TEST nvme 00:09:18.951 ************************************ 00:09:18.951 00:09:18.951 real 1m34.781s 00:09:18.951 user 3m33.795s 00:09:18.951 sys 0m25.953s 00:09:18.951 18:22:37 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.951 18:22:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:18.951 18:22:37 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:18.951 18:22:37 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:18.951 18:22:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:18.951 18:22:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.951 18:22:37 -- common/autotest_common.sh@10 -- # set +x 00:09:18.951 ************************************ 00:09:18.951 START TEST nvme_scc 00:09:18.951 ************************************ 00:09:18.951 18:22:37 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:18.951 * Looking for test storage... 
00:09:18.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:18.951 18:22:37 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:18.951 18:22:37 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:18.951 18:22:37 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:18.951 18:22:37 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:18.951 18:22:37 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.951 18:22:37 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.951 18:22:37 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.951 18:22:37 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.951 18:22:37 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.951 18:22:37 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.951 18:22:37 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:18.951 18:22:38 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.951 18:22:38 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:18.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.951 --rc genhtml_branch_coverage=1 00:09:18.951 --rc genhtml_function_coverage=1 00:09:18.951 --rc genhtml_legend=1 00:09:18.951 --rc geninfo_all_blocks=1 00:09:18.951 --rc geninfo_unexecuted_blocks=1 00:09:18.951 00:09:18.951 ' 00:09:18.951 18:22:38 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:18.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.951 --rc genhtml_branch_coverage=1 00:09:18.951 --rc genhtml_function_coverage=1 00:09:18.951 --rc genhtml_legend=1 00:09:18.951 --rc geninfo_all_blocks=1 00:09:18.951 --rc geninfo_unexecuted_blocks=1 00:09:18.951 00:09:18.951 ' 00:09:18.951 18:22:38 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:18.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.951 --rc genhtml_branch_coverage=1 00:09:18.951 --rc genhtml_function_coverage=1 00:09:18.951 --rc genhtml_legend=1 00:09:18.951 --rc geninfo_all_blocks=1 00:09:18.951 --rc geninfo_unexecuted_blocks=1 00:09:18.951 00:09:18.951 ' 00:09:18.951 18:22:38 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:18.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.951 --rc genhtml_branch_coverage=1 00:09:18.951 --rc genhtml_function_coverage=1 00:09:18.951 --rc genhtml_legend=1 00:09:18.951 --rc geninfo_all_blocks=1 00:09:18.951 --rc geninfo_unexecuted_blocks=1 00:09:18.951 00:09:18.951 ' 00:09:18.951 18:22:38 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:18.951 18:22:38 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:18.951 18:22:38 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:18.951 18:22:38 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:18.951 18:22:38 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.951 18:22:38 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.951 18:22:38 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.951 18:22:38 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.951 18:22:38 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.951 18:22:38 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:18.951 18:22:38 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
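[editor note] The scripts/common.sh trace a few lines up (lt 1.15 2 → cmp_versions 1.15 '<' 2) is deciding which lcov option set to export: both version strings are split on '.', '-' and ':' and compared component by component. A minimal sketch of that comparison, assuming purely numeric components (the real helper also validates each component against ^[0-9]+$ and handles the '>' and '=' operators):

    #!/usr/bin/env bash
    # Sketch of the lt/cmp_versions helper: returns success when $1 < $2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # Missing components count as 0, so "2" compares like "2.0".
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1  # equal
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2.x: use the pre-2.0 LCOV_OPTS"

Here the probe found lcov 1.15, so the branch/function-coverage flags exported above are the pre-2.0 spelling.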
00:09:18.951 18:22:38 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:18.951 18:22:38 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:18.951 18:22:38 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:18.951 18:22:38 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:18.952 18:22:38 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:18.952 18:22:38 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:18.952 18:22:38 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:18.952 18:22:38 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:18.952 18:22:38 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:18.952 18:22:38 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:18.952 18:22:38 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:18.952 18:22:38 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:18.952 18:22:38 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:18.952 18:22:38 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:18.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:18.952 Waiting for block devices as requested 00:09:18.952 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.952 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.952 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.952 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.255 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:24.255 18:22:43 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:24.255 18:22:43 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.255 18:22:43 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:24.255 18:22:43 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.255 18:22:43 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
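[editor note] What begins here and repeats for every remaining field below is the nvme_get pattern: each "field : value" line emitted by nvme id-ctrl is read with IFS=: and eval'd into a global associative array, so later tests can consult ${nvme0[mdts]} or ${nvme0[oncs]} directly. A minimal sketch that hardcodes nvme0; the real nvme/functions.sh fills whatever array name it is handed by reference, and repeats the process for per-namespace arrays such as ng0n1 further down:

    #!/usr/bin/env bash
    # Sketch of nvme_get: turn `nvme id-ctrl` output into an associative array.
    # Whitespace trimming is simplified relative to nvme/functions.sh.
    declare -gA nvme0=()

    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # skip banner/blank lines with no ':'
        reg=${reg//[[:space:]]/}         # field names become single tokens (e.g. "ps 0" -> ps0)
        val=${val# }                     # drop the space after ':'
        nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

    echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} subnqn=${nvme0[subnqn]}"

Lines whose value part is empty are simply skipped, which is what the recurring [[ -n '' ]] guard in the trace is doing; everything else lands in the array one field at a time, as the long run of eval/assignment pairs below shows.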
00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.255 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.256 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:24.257 18:22:43 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.257 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:24.258 18:22:43 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:24.258 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:24.259 
18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:24.259 18:22:43 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.259 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:24.260 18:22:43 nvme_scc 
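At functions.sh@53-58 the trace walks the per-controller namespace scan: an extglob pattern matches both the ng<ctrl>n* character nodes and the <ctrl>n* block nodes under /sys/class/nvme/nvme0, each match is fed to nvme_get, and the namespace index is recorded through a nameref. A sketch under those assumptions (extglob must be enabled; error handling omitted):

  shopt -s extglob
  ctrl=/sys/class/nvme/nvme0
  declare -A nvme0_ns=()
  declare -n _ctrl_ns=nvme0_ns                 # nameref, as at functions.sh@53
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue                 # pattern expands to @(ng0|nvme0n)*
      ns_dev=${ns##*/}                         # ng0n1, then nvme0n1
      nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
      _ctrl_ns[${ns##*n}]=$ns_dev              # key is the namespace id: ...n1 -> 1
  done

Both ng0n1 and nvme0n1 reduce to key 1, so the block device recorded second overwrites the character device in _ctrl_ns, which is what the two functions.sh@58 assignments in this trace show.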
-- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:24.260 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:24.261 18:22:43 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:24.261 18:22:43 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:24.262 18:22:43 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.262 18:22:43 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:24.262 18:22:43 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.262 18:22:43 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:24.262 18:22:43 
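Once a controller's namespaces are parsed, functions.sh@60-63 files it into four maps — ctrls, nvmes, bdfs, ordered_ctrls — and the @47-51 loop advances to the next /sys/class/nvme entry, gating on pci_can_use (which the trace shows regex-matching the BDF against allow/deny lists in scripts/common.sh) before probing it. Roughly, with the PCI-address lookup abbreviated (the readlink/basename step is an assumption, since the trace only shows its result):

  declare -A ctrls=() nvmes=() bdfs=()
  declare -a ordered_ctrls=()
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed lookup; trace shows pci=0000:00:10.0
      pci_can_use "$pci" || continue                   # allow/deny-list check from scripts/common.sh
      ctrl_dev=${ctrl##*/}                             # nvme0, nvme1, ...
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
      # ...namespace scan from the previous sketch...
      ctrls[$ctrl_dev]=$ctrl_dev                       # ctrls[nvme0]=nvme0
      nvmes[$ctrl_dev]=${ctrl_dev}_ns                  # nvmes[nvme0]=nvme0_ns
      bdfs[$ctrl_dev]=$pci                             # bdfs[nvme0]=0000:00:11.0
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # numeric slot: ordered_ctrls[0]=nvme0
  done

The id-ctrl pass that follows for nvme1 then repeats the same IFS=:/read/eval capture, this time over controller identify fields (vid, ssvid, sn, mn, fr, and so on), as the remainder of the trace records.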
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.262 
18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:24.262 
18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.262 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.263 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.264 18:22:43 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:24.264 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.265 18:22:43 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
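The @(...) alternation in the namespace loop above is a bash extglob; with ctrl=/sys/class/nvme/nvme1 the two parameter expansions reduce to "1" and "nvme1", so one glob covers both the character node and the block namespace. A small illustration (extglob must be enabled; the paths are assumed for the example):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    echo "${ctrl##*nvme}"   # -> 1      (longest prefix matching *nvme removed)
    echo "${ctrl##*/}"      # -> nvme1  (directory part removed)
    # "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* therefore matches both
    # /sys/class/nvme/nvme1/ng1n1 and /sys/class/nvme/nvme1/nvme1n1,
    # which is why the loop visits ng1n1 first and nvme1n1 second above.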
00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.265 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:24.266 18:22:43 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 
18:22:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:43 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
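The fields just captured pin down the namespace geometry: flbas=0x7 selects format index 7, whose descriptor reads "ms:64 lbads:12 (in use)", i.e. 4096-byte logical blocks with 64 bytes of metadata, and nsze=0x17a17a is the size in blocks. The arithmetic below is added commentary, not part of the log:

    echo $(( 0x17a17a ))              # 1548666 logical blocks
    echo $(( 1 << 12 ))               # 4096-byte blocks (lbads:12)
    echo $(( 0x17a17a * (1 << 12) ))  # 6343335936 bytes, ~6.3 GB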
00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.266 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:24.267 
18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.267 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:24.268 18:22:44 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:24.268 18:22:44 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.268 18:22:44 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:24.268 18:22:44 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.268 18:22:44 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:24.268 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
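Between the two dumps, functions.sh@60-63 registered the finished controller before moving on to nvme2: the id-ctrl array, the namespace map, the PCI address, and an index ordered by controller number. A bookkeeping sketch with the same shape (names mirror the trace; pci_can_use's block/skip-list logic is only hinted at by its [[ -z '' ]] test above):

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    register_ctrl() {
        local ctrl_dev=$1 bdf=$2                    # e.g. nvme1 0000:00:10.0
        ctrls["$ctrl_dev"]=$ctrl_dev                # -> id-ctrl array name
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns           # -> namespace map name
        bdfs["$ctrl_dev"]=$bdf                      # -> PCI address
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev  # index by controller number
    }
    register_ctrl nvme1 0000:00:10.0
    echo "${bdfs[nvme1]}"                           # -> 0000:00:10.0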
00:09:24.268-00:09:24.271 18:22:44 nvme_scc -- nvme/functions.sh@21-23 -- # nvme_get: remaining id-ctrl fields for nvme2 parsed into the nvme2[] associative array (per-field IFS=: / read -r reg val / eval trace condensed to reg=val):
    ctratt=0x8000  rrls=0  cntrltype=1  fguid=00000000-0000-0000-0000-000000000000
    crdt1=0  crdt2=0  crdt3=0  nvmsr=0  vwci=0  mec=0
    oacs=0x12a  acl=3  aerl=3  frmw=0x3  lpa=0x7  elpe=0  npss=0  avscc=0  apsta=0
    wctemp=343  cctemp=373  mtfa=0  hmpre=0  hmmin=0  tnvmcap=0  unvmcap=0  rpmbs=0  edstt=0
    dsto=0  fwug=0  kas=0  hctma=0  mntmt=0  mxtmt=0  sanicap=0  hmminds=0  hmmaxd=0
    nsetidmax=0  endgidmax=0  anatt=0  anacap=0  anagrpmax=0  nanagrpid=0  pels=0  domainid=0  megcap=0
    sqes=0x66  cqes=0x44  maxcmd=0  nn=256  oncs=0x15d  fuses=0  fna=0  vwc=0x7
    awun=0  awupf=0  icsvscc=0  nwpc=0  acwu=0  ocfs=0x3  sgls=0x1  mnan=0  maxdna=0  maxcna=0
    subnqn=nqn.2019-08.org.qemu:12342  ioccsz=0  iorcsz=0  icdoff=0  fcatt=0  msdbd=0  ofcs=0
    ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
    rwt='0 rwl:0 idle_power:- active_power:-'  active_power_workload=-
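The run above is the same short cycle repeated once per id-ctrl field: read one "field : value" line from nvme-cli with IFS=:, skip it when the value is empty, otherwise eval it into a bash associative array keyed by the field name. A minimal sketch of that loop, assuming nvme-cli's one-pair-per-line output; the name parse_id_output and the exact whitespace trimming are illustrative, not lifted from nvme/functions.sh:

    # Sketch only: mirrors the IFS=: / read -r reg val / [[ -n ... ]] / eval
    # cycle traced above; the trimming details are assumptions.
    parse_id_output() {
        local ref=$1 dev=$2 reg val         # e.g. parse_id_output nvme2 /dev/nvme2
        local -gA "$ref=()"                 # like the traced: local -gA 'nvme2=()'
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # strip padding around the field name
            val=${val# }                    # drop the space after the colon
            [[ -n $val ]] || continue       # the traced empty-value guard
            eval "${ref}[${reg}]=\"\$val\"" # e.g. nvme2[ctratt]="0x8000"
        done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
    }

After parse_id_output nvme2 /dev/nvme2, ${nvme2[ctratt]} expands to 0x8000, matching the assignments logged above.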
00:09:24.271 18:22:44 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:09:24.271 18:22:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:24.271 18:22:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:09:24.271 18:22:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:09:24.271 18:22:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:09:24.271 18:22:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:09:24.271-00:09:24.272 18:22:44 nvme_scc -- nvme/functions.sh@17-23 -- # nvme_get: local -gA 'ng2n1=()' declared, id-ns fields parsed into it (condensed):
    nsze=0x100000  ncap=0x100000  nuse=0x100000  nsfeat=0x14  nlbaf=7  flbas=0x4
    mc=0x3  dpc=0x1f  dps=0  nmic=0  rescap=0  fpi=0  dlfeat=1
    nawun=0  nawupf=0  nacwu=0  nabsn=0  nabo=0  nabspf=0  noiob=0  nvmcap=0
    npwg=0  npwa=0  npdg=0  npda=0  nows=0
    mssrl=128  mcl=128  msrc=127  nulbaf=0  anagrpid=0  nsattr=0  nvmsetid=0  endgid=0
    nguid=00000000000000000000000000000000  eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0'    lbaf1='ms:8 lbads:9 rp:0'    lbaf2='ms:16 lbads:9 rp:0'    lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0 (in use)'  lbaf5='ms:8 lbads:12 rp:0'  lbaf6='ms:16 lbads:12 rp:0'  lbaf7='ms:64 lbads:12 rp:0'
00:09:24.272 18:22:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
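To read the lbafN strings in the table above: in NVMe identify-namespace data, lbads is the log2 of the LBA data size and ms the per-block metadata bytes, so the in-use format 4 (selected by flbas=0x4) means 2^12 = 4096-byte blocks with no separate metadata. A small, illustrative decode of the string exactly as the test stores it:

    # Sketch only: pull the block size out of an lbaf string captured above.
    lbaf='ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}    # -> 12
    echo "$((1 << lbads))-byte LBAs"             # -> 4096-byte LBAs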
00:09:24.272 18:22:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:24.272 18:22:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:09:24.272 18:22:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:09:24.272 18:22:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:09:24.272 18:22:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:09:24.272-00:09:24.274 18:22:44 nvme_scc -- nvme/functions.sh@17-23 -- # nvme_get: local -gA 'ng2n2=()' declared, id-ns fields parsed into it (values identical to ng2n1, condensed):
    nsze=0x100000  ncap=0x100000  nuse=0x100000  nsfeat=0x14  nlbaf=7  flbas=0x4
    mc=0x3  dpc=0x1f  dps=0  nmic=0  rescap=0  fpi=0  dlfeat=1
    nawun=0  nawupf=0  nacwu=0  nabsn=0  nabo=0  nabspf=0  noiob=0  nvmcap=0
    npwg=0  npwa=0  npdg=0  npda=0  nows=0
    mssrl=128  mcl=128  msrc=127  nulbaf=0  anagrpid=0  nsattr=0  nvmsetid=0  endgid=0
    nguid=00000000000000000000000000000000  eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0'    lbaf1='ms:8 lbads:9 rp:0'    lbaf2='ms:16 lbads:9 rp:0'    lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0 (in use)'  lbaf5='ms:8 lbads:12 rp:0'  lbaf6='ms:16 lbads:12 rp:0'  lbaf7='ms:64 lbads:12 rp:0'
00:09:24.274 18:22:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
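The @54/@55/@58 lines that bracket each namespace are a single extglob loop over the controller's sysfs directory: with ctrl=/sys/class/nvme/nvme2, ${ctrl##*nvme} is 2 and ${ctrl##*/} is nvme2, so the pattern matches both ng2n* character devices and nvme2n* block devices, and @58 keys the nameref'd array by the digit after the final n. A self-contained sketch of that walk, assuming the same sysfs layout (list_ctrl_namespaces is an illustrative name, not a functions.sh helper):

    # Sketch only: the namespace enumeration performed by functions.sh@54-58.
    shopt -s extglob nullglob
    list_ctrl_namespaces() {
        local ctrl=$1 ns                    # e.g. /sys/class/nvme/nvme2
        local -A ns_map=()
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            ns_map[${ns##*n}]=${ns##*/}     # e.g. ns_map[1]=ng2n1
        done
        declare -p ns_map                   # keys 1..N; a later match for the
    }                                       # same index overwrites the earlier
    list_ctrl_namespaces /sys/class/nvme/nvme2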
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:24.540 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:24.540 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.540 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.540 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.541 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.542 18:22:44 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.542 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:24.543 18:22:44 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.543 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:24.544 18:22:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.544 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
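The xtrace records above and below all come from two small helpers in the harness's nvme/functions.sh (the @NN markers are its line numbers): a loop over /sys/class/nvme/nvme2 that visits both the generic character nodes (ng2n1..ng2n3) and the block nodes (nvme2n1..nvme2n3), and nvme_get, which snapshots the "register : value" report of nvme id-ns into one Bash associative array per device. The sketch below is a minimal reconstruction inferred from this trace, not the repository's exact code; NVME_CMD is a stand-in for the pinned /usr/local/src/nvme-cli/nvme binary used in this run.

    #!/usr/bin/env bash
    # Reconstruction of the traced helpers; details may differ from the
    # real test harness. Requires bash; extglob enables the @(...) pattern.
    shopt -s extglob nullglob

    NVME_CMD=${NVME_CMD:-nvme}

    # functions.sh@16-23 as traced: run an nvme-cli subcommand and fold
    # each "register : value" line of its output into a global associative
    # array named by $1, e.g. nvme2n3[nsze]="0x100000".
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # @20: declare the global array
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # @22: skip headers/blank lines
            reg=${reg//[[:space:]]/}         # "nsze   " -> "nsze"
            eval "${ref}[$reg]=\"${val# }\"" # @23: store the trimmed value
        done < <("$NVME_CMD" "$@")           # @16: e.g. nvme id-ns /dev/ng2n3
    }

    # functions.sh@54-58 as traced: enumerate a controller's namespace
    # nodes and remember, per namespace id, the last device seen for it.
    declare -A _ctrl_ns
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue                # @55
        ns_dev=${ns##*/}                        # @56: ng2n3, nvme2n3, ...
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev" # @57
        _ctrl_ns[${ns##*n}]=$ns_dev             # @58: index by namespace id
    done

Populating the arrays up front is presumably what lets later checks in the suite test identify fields directly, e.g. [[ ${nvme2n1[flbas]} == 0x4 ]], which would explain why every register, down to the unused lbaf slots, is read and stored here.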
00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:24.545 18:22:44 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.545 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.546 18:22:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.546 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:24.547 18:22:44 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:24.547 18:22:44 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.547 18:22:44 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:24.547 18:22:44 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.547 18:22:44 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 
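Note on the mdts value recorded just below (mdts=7 for nvme3): MDTS bounds the largest data transfer the controller accepts, expressed as a power-of-two multiplier of the controller's minimum memory page size. A minimal decode sketch, assuming the usual 4 KiB CAP.MPSMIN that these QEMU controllers report (the page size is an assumption, not read from the log):

    mdts=7            # from the id-ctrl parse below
    mps_min=4096      # assumed minimum page size (CAP.MPSMIN), in bytes
    echo "max transfer: $(( mps_min * (1 << mdts) )) bytes"   # 524288 = 512 KiB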
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:24.547 18:22:44 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:24.547 18:22:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.547 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 
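The wctemp/cctemp fields captured just below are reported in Kelvin, per the NVMe identify-controller format: 343 K is the warning composite temperature threshold and 373 K the critical one. A one-line conversion sketch; k2c is a hypothetical helper, not part of functions.sh:

    k2c() { echo "$(( $1 - 273 )) C"; }   # approximate Kelvin -> Celsius
    k2c 343   # wctemp -> 70 C  (warning threshold)
    k2c 373   # cctemp -> 100 C (critical threshold)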
18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:24.548 18:22:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 
18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:24.548 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:24.549 
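The sqes value just parsed (0x66) and the cqes value that follows (0x44) each pack two log2 sizes into one byte: the low nibble is the required queue-entry size and the high nibble the maximum supported size. A short decode sketch under that reading:

    sqes=0x66; cqes=0x44   # values from the id-ctrl parse around this point
    printf 'SQE: required %d B, max %d B\n' $(( 2 ** (sqes & 0xf) )) $(( 2 ** (sqes >> 4) ))   # 64 B / 64 B
    printf 'CQE: required %d B, max %d B\n' $(( 2 ** (cqes & 0xf) )) $(( 2 ** (cqes >> 4) ))   # 16 B / 16 B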
18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:24.549 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:24.550 18:22:44 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:24.550 18:22:44 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
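Everything above is nvme_get tokenizing nvme-cli id-ctrl/id-ns output on ':' and eval'ing each register into a bash associative array; get_ctrls_with_feature then walks those arrays and keeps the controllers whose ONCS word has bit 8 (the Copy command) set, which 0x15d does. A condensed sketch of both steps, assuming the same "name : value" layout nvme-cli prints (the whitespace stripping here is simplified relative to functions.sh, and the nvme binary path is the one this CI box uses):

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}          # register name, e.g. "oncs"
        [[ -n $reg && -n $val ]] && ctrl[$reg]=${val# }
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1)

    # 0x15d = 0b1_0101_1101: bit 8 is set, so Simple Copy is advertised.
    (( ctrl[oncs] & 1 << 8 )) && echo "nvme1 supports SCC"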
00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:24.550 18:22:44 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:24.550 18:22:44 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:24.550 18:22:44 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:24.550 18:22:44 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:25.122 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:25.709 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:25.709 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:25.709 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:25.709 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:25.709 18:22:45 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:25.709 18:22:45 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:25.709 18:22:45 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.709 18:22:45 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:25.709 ************************************ 00:09:25.709 START TEST nvme_simple_copy 00:09:25.709 ************************************ 00:09:25.709 18:22:45 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:25.969 Initializing NVMe Controllers 00:09:25.969 Attaching to 0000:00:10.0 00:09:25.969 Controller supports SCC. Attached to 0000:00:10.0 00:09:25.969 Namespace ID: 1 size: 6GB 00:09:25.969 Initialization complete. 
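The nvme_simple_copy example just launched writes LBAs 0 to 63 with random data, issues one Simple Copy command targeting destination LBA 256, then reads the destination back and counts matching LBAs; its per-field report follows. The 4096-byte namespace block size it prints matches the pattern seen in the id-ns parses earlier: flbas 0x4 selects lbaf4 (lbads:12, i.e. 2^12-byte data blocks). Since all four controllers advertised SCC, the same binary could be pointed at another one by swapping the traddr (illustrative re-run, not part of this job):

    /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy \
        -r 'trtype:pcie traddr:0000:00:13.0'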
00:09:25.969 00:09:25.969 Controller QEMU NVMe Ctrl (12340 ) 00:09:25.969 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:25.969 Namespace Block Size:4096 00:09:25.969 Writing LBAs 0 to 63 with Random Data 00:09:25.969 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:25.969 LBAs matching Written Data: 64 00:09:25.969 ************************************ 00:09:25.969 END TEST nvme_simple_copy 00:09:25.969 ************************************ 00:09:25.969 00:09:25.969 real 0m0.275s 00:09:25.969 user 0m0.105s 00:09:25.969 sys 0m0.068s 00:09:25.969 18:22:45 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.969 18:22:45 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:25.969 ************************************ 00:09:25.969 END TEST nvme_scc 00:09:25.969 ************************************ 00:09:25.969 00:09:25.969 real 0m7.957s 00:09:25.969 user 0m1.119s 00:09:25.969 sys 0m1.332s 00:09:25.969 18:22:45 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.969 18:22:45 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:25.969 18:22:45 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:25.969 18:22:45 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:25.969 18:22:45 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:25.969 18:22:45 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:25.969 18:22:45 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:25.969 18:22:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.969 18:22:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.969 18:22:45 -- common/autotest_common.sh@10 -- # set +x 00:09:26.230 ************************************ 00:09:26.230 START TEST nvme_fdp 00:09:26.230 ************************************ 00:09:26.230 18:22:45 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:26.230 * Looking for test storage... 00:09:26.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:26.230 18:22:45 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:26.230 18:22:45 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:26.230 18:22:45 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:26.230 18:22:46 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:26.230 18:22:46 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.230 18:22:46 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:26.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.230 --rc genhtml_branch_coverage=1 00:09:26.230 --rc genhtml_function_coverage=1 00:09:26.230 --rc genhtml_legend=1 00:09:26.230 --rc geninfo_all_blocks=1 00:09:26.230 --rc geninfo_unexecuted_blocks=1 00:09:26.230 00:09:26.230 ' 00:09:26.230 18:22:46 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:26.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.230 --rc genhtml_branch_coverage=1 00:09:26.230 --rc genhtml_function_coverage=1 00:09:26.230 --rc genhtml_legend=1 00:09:26.230 --rc geninfo_all_blocks=1 00:09:26.230 --rc geninfo_unexecuted_blocks=1 00:09:26.230 00:09:26.230 ' 00:09:26.230 18:22:46 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:26.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.230 --rc genhtml_branch_coverage=1 00:09:26.230 --rc genhtml_function_coverage=1 00:09:26.230 --rc genhtml_legend=1 00:09:26.230 --rc geninfo_all_blocks=1 00:09:26.230 --rc geninfo_unexecuted_blocks=1 00:09:26.230 00:09:26.230 ' 00:09:26.230 18:22:46 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:26.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.230 --rc genhtml_branch_coverage=1 00:09:26.230 --rc genhtml_function_coverage=1 00:09:26.230 --rc genhtml_legend=1 00:09:26.230 --rc geninfo_all_blocks=1 00:09:26.230 --rc geninfo_unexecuted_blocks=1 00:09:26.230 00:09:26.230 ' 00:09:26.230 18:22:46 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:26.230 18:22:46 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:26.230 18:22:46 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:26.230 18:22:46 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:26.230 18:22:46 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.230 18:22:46 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.230 18:22:46 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.230 18:22:46 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.230 18:22:46 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.230 18:22:46 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:26.230 18:22:46 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.230 18:22:46 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:26.230 18:22:46 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:26.230 18:22:46 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:26.230 18:22:46 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:26.230 18:22:46 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:26.230 18:22:46 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:26.230 18:22:46 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:26.230 18:22:46 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:26.230 18:22:46 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:26.230 18:22:46 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.230 18:22:46 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:26.491 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:26.752 Waiting for block devices as requested 00:09:26.752 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:26.752 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.013 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.014 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.315 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:32.315 18:22:51 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:32.315 18:22:51 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:32.315 18:22:51 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:32.315 18:22:51 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:32.315 18:22:51 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.315 18:22:51 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.315 18:22:51 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.315 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:32.316 18:22:51 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.316 18:22:51 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.316 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:32.317 18:22:51 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 
18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:32.317 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:32.318 18:22:51 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:32.318 18:22:51 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:32.318 18:22:51 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:32.318 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
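[annotation] The trace above is the scan_nvme_ctrls/nvme_get loop from test/common/nvme/functions.sh materializing every id-ctrl and id-ns field into bash associative arrays (nvme0, ng0n1, ...): it sets IFS=:, reads each output line into reg/val, and evals the pair into the array. A minimal sketch of that pattern, assuming only that nvme-cli prints one "field : value" pair per line (the device path /dev/nvme0 is illustrative, not queried from this run):

  #!/usr/bin/env bash
  declare -A ctrl
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}   # keys are right-padded, e.g. "vid       "
      val=${val# }               # drop only the separator space; values keep
                                 # their own padding, as in sn='12341   ' above
      [[ -n $reg ]] && ctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme0)
  echo "vid=${ctrl[vid]} mn=${ctrl[mn]} mdts=${ctrl[mdts]}"

The real helper also handles multi-token lines (power states, lbaf descriptors); this sketch shows only the core key/value case the registers above exercise.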
00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:32.319 18:22:51 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.319 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
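[annotation] The id-ns dump just captured for ng0n1 is enough to recover the geometry the simple-copy test reported at the top of this log: flbas=0x4 selects LBA format 4, whose descriptor reads "ms:0 lbads:12 rp:0 (in use)", and lbads is a power-of-two exponent, so the block size is 2^12 = 4096 bytes, matching "Namespace Block Size:4096". A worked check, with every value copied from the trace rather than queried:

  flbas=0x4                    # from ng0n1[flbas]
  fmt=$(( flbas & 0xf ))       # low nibble = in-use LBA format index -> 4
  lbads=12                     # from "lbaf4 : ms:0 lbads:12 rp:0 (in use)"
  nsze=0x140000                # namespace size in LBAs, from ng0n1[nsze]
  echo "block size: $(( 1 << lbads )) B"                             # 4096
  echo "capacity:   $(( nsze * (1 << lbads) / 1024 / 1024 )) MiB"    # 5120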
00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:32.320 18:22:51 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:32.320 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.321 18:22:51 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:32.321 18:22:51 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:32.321 18:22:51 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:32.321 18:22:51 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.321 18:22:51 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:32.321 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:32.322 18:22:51 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
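The same nvme_get routine is being reused in this block with id-ctrl, so nvme1 accumulates controller-level registers (vid, sn, mdts, oacs, ...) rather than namespace ones. Just before it, the nvme0 pass ended with the bookkeeping traced at functions.sh@60-63, and scripts/common.sh@18-27 showed pci_can_use accepting 0000:00:10.0 because the allow/block lists are empty in this run. A sketch of that registration step (the array names are taken from the trace; the wrapper function itself is hypothetical):

```bash
# Bookkeeping as traced at functions.sh@60-63; helper name is hypothetical.
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

register_ctrl() {
	local ctrl_dev=$1 pci=$2                    # e.g. nvme0 0000:00:11.0
	ctrls["$ctrl_dev"]=$ctrl_dev                # discovered controllers
	nvmes["$ctrl_dev"]=${ctrl_dev}_ns           # name of its namespace map
	bdfs["$ctrl_dev"]=$pci                      # controller -> PCI BDF
	ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev  # index 0 -> nvme0, 1 -> nvme1
}

register_ctrl nvme0 0000:00:11.0
```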
00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.322 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
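Most of the id-ctrl fields landing in nvme1 here (oacs=0x12a, frmw=0x3, lpa=0x7) are bit masks, so once they sit in the associative array, downstream capability checks reduce to shell arithmetic on the parsed strings. A hypothetical helper, not part of functions.sh, to show the pattern:

```bash
# Hypothetical helper: test one capability bit in a parsed id-ctrl field.
ctrl_has_bit() {
	local -n _c=$1        # nameref into e.g. the nvme1 array filled above
	local reg=$2 bit=$3
	(( (${_c[$reg]:-0} & bit) == bit ))
}

# oacs=0x12a has bit 1 set, which is Format NVM support in NVMe's OACS:
ctrl_has_bit nvme1 oacs 0x2 && echo "nvme1 supports Format NVM"
```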
00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.323 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.324 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:32.325 18:22:51 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
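flbas=0x7 just above means LBA format 7 is the active one for ng1n1 (nvme0n1 earlier reported flbas=0x4, matching its lbaf4 entry marked "(in use)"). Bits 3:0 of FLBAS index the lbafN strings this loop collects, and the lbads value inside that string is the LBA size as a power of two. A sketch under the assumption that the arrays are filled as traced (the helper name is illustrative, not from functions.sh):

```bash
# Sketch: derive the active LBA size from parsed id-ns fields.
active_lba_size() {
	local -n _ns=$1                       # e.g. nvme0n1 or ng1n1
	local idx=$(( ${_ns[flbas]} & 0xf ))  # FLBAS bits 3:0 -> format index
	local lbads=${_ns[lbaf$idx]#*lbads:}  # 'ms:0 lbads:12 rp:0 (in use)'
	echo $(( 1 << ${lbads%% *} ))         # 2^12 = 4096
}

active_lba_size nvme0n1   # flbas=0x4, lbaf4 has lbads:12 -> 4096
```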
00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:32.325 18:22:51 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
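The mssrl/mcl/msrc registers just captured (128, 128, 127) bound the NVMe Copy command. Assuming their standard spec meanings — mssrl and mcl count logical blocks per source range and per whole copy, and msrc is a 0's-based range count — a hypothetical validity check against the captured ng1n1 array could look like this (a sketch, not part of the traced script):

    # Check a proposed Copy against the limits parsed above for ng1n1.
    copy_fits() {
        local -n ns=$1; shift
        local total=0 r
        (( $# <= ns[msrc] + 1 )) || return 1  # msrc=127 -> at most 128 source ranges
        for r in "$@"; do                     # each argument: blocks in one range
            (( r <= ns[mssrl] )) || return 1  # no range may exceed mssrl blocks
            (( total += r ))
        done
        (( total <= ns[mcl] ))                # the whole copy caps at mcl blocks
    }
    # copy_fits ng1n1 64 64   -> succeeds (2 ranges, 128 blocks total)
    # copy_fits ng1n1 129     -> fails (single range exceeds mssrl=128)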
00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:32.325 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.326 18:22:52 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.326 18:22:52 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:32.326 18:22:52 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.326 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:32.327 18:22:52 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:32.327 18:22:52 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:32.327 18:22:52 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:32.327 18:22:52 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.327 18:22:52 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:32.327 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
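The ver register captured for nvme2 (0x10400) packs the supported spec version as major/minor/tertiary bytes, so it decodes to NVMe 1.4.0. A one-off decode sketch (not part of the traced script):

    # VER field layout: bits 31:16 major, 15:8 minor, 7:0 tertiary.
    decode_ver() {
        local v=$(( $1 ))
        printf 'NVMe %d.%d.%d\n' $(( v >> 16 )) $(( (v >> 8) & 0xff )) $(( v & 0xff ))
    }
    # decode_ver 0x10400  -> NVMe 1.4.0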
00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:32.328 18:22:52 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.328 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
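The wctemp/cctemp thresholds parsed just above are reported in Kelvin per the NVMe spec, so the QEMU controller's values convert to roughly 70 and 100 degrees Celsius (integer conversion sketch, assuming the usual 273 K offset):

    kelvin_to_c() { echo $(( $1 - 273 )); }
    # kelvin_to_c 343  -> 70   (warning composite temperature threshold, degC)
    # kelvin_to_c 373  -> 100  (critical composite temperature threshold, degC)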
00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:32.329 18:22:52 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:32.329 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.330 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.331 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.331 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:32.331 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:32.331 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # 
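Every field in the trace above is produced by the same three-line pattern in nvme/functions.sh: set IFS=:, `read -r reg val` from nvme-cli output, and eval the pair into a named associative array. A minimal sketch of that loop, under the assumption that `nvme` is nvme-cli and with a hypothetical helper name (nvme_get_sketch is not the script's real function):

    #!/usr/bin/env bash
    # Minimal sketch (assumed names) of the nvme_get parse loop traced above:
    # split each "field : value" line of nvme-cli output on the first colon
    # and store it in a caller-named associative array via a nameref.
    nvme_get_sketch() {               # hypothetical stand-in for nvme_get
        local -n _arr=$1              # e.g. nvme2, ng2n1, ...
        local subcmd=$2 dev=$3        # e.g. id-ctrl /dev/nvme2
        local reg val
        while IFS=': ' read -r reg val; do
            # skip banner/blank lines, mirroring the [[ -n $val ]] guard
            [[ -n $reg && -n $val ]] && _arr[$reg]=$val
        done < <(nvme "$subcmd" "$dev")
    }

    declare -A nvme2=()
    nvme_get_sketch nvme2 id-ctrl /dev/nvme2
    echo "vwc=${nvme2[vwc]}"          # would print vwc=0x7 for this controller

Because the remainder of the line is assigned to val whole, multi-word values such as the ps0 power-state string survive the split intact; the same loop then repeats verbatim for each namespace below.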
00:09:32.331 18:22:52 nvme_fdp -- nvme/functions.sh@16-23 -- # ng2n1 (id-ns, continued): nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:32.332 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:32.332 18:22:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng2n1
00:09:32.332 18:22:52 nvme_fdp -- nvme/functions.sh@54-57 -- # namespace scan: found ng2n2, running nvme_get ng2n2 id-ns /dev/ng2n2
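The `for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*` step in the trace is an extglob pattern that picks up both the character-device nodes (ng2n1, ...) and the block-device nodes (nvme2n1, ...) under the controller's sysfs directory. A small standalone sketch of that expansion, assuming ctrl=/sys/class/nvme/nvme2 as in this run:

    #!/usr/bin/env bash
    # Sketch of the namespace glob used at nvme/functions.sh@54.
    shopt -s extglob nullglob

    ctrl=/sys/class/nvme/nvme2
    # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the pattern
    # expands to @(ng2|nvme2n)* and matches ng2n1..ng2n3 and nvme2n1...
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}              # e.g. ng2n1 or nvme2n1
        echo "would run: nvme id-ns /dev/$ns_dev"
    done

This is why the log walks ng2n1, ng2n2, ng2n3 and then nvme2n1 for the same controller: both device-node flavors match the one glob, in sorted order.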
00:09:32.332 18:22:52 nvme_fdp -- nvme/functions.sh@16-23 -- # ng2n2 (id-ns): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:32.333 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2 LBA formats: identical to ng2n1 (lbaf0-lbaf7, with lbaf4 'ms:0 lbads:12 rp:0' in use)
00:09:32.333 18:22:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[2]=ng2n2
00:09:32.333 18:22:52 nvme_fdp -- nvme/functions.sh@54-57 -- # namespace scan: found ng2n3, running nvme_get ng2n3 id-ns /dev/ng2n3
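Each lbafN string above encodes the metadata size (ms), the LBA data size as a power of two (lbads), and a relative-performance hint (rp); the in-use format is selected by the low nibble of flbas (0x4 here, so lbaf4, i.e. 2^12 = 4096-byte blocks with no metadata). A hedged sketch of decoding those two fields from the arrays the trace builds, with the ng2n2 values copied from the log:

    #!/usr/bin/env bash
    # Sketch: derive the active block size from flbas/lbafN as traced above.
    # The decoding follows the NVMe Identify Namespace layout (size = 2^lbads).
    declare -A ng2n2=(
        [flbas]=0x4
        [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    )

    fmt=$(( ${ng2n2[flbas]} & 0xf ))            # low nibble selects the format
    lbaf=${ng2n2[lbaf$fmt]}
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}  # pull out the lbads field
    echo "in-use format: lbaf$fmt, block size $((1 << lbads)) bytes"
    # -> in-use format: lbaf4, block size 4096 bytes

With nsze=0x100000 blocks of 4096 bytes, each of these QEMU-backed namespaces is 4 GiB.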
00:09:32.334 18:22:52 nvme_fdp -- nvme/functions.sh@16-23 -- # ng2n3 (id-ns): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:32.335 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3 LBA formats: identical to ng2n1 (lbaf0-lbaf7, with lbaf4 'ms:0 lbads:12 rp:0' in use)
00:09:32.335 18:22:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[3]=ng2n3
00:09:32.335 18:22:52 nvme_fdp -- nvme/functions.sh@54-57 -- # namespace scan: found nvme2n1, running nvme_get nvme2n1 id-ns /dev/nvme2n1
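Note the @58 registration step: `_ctrl_ns[${ns##*n}]=...` keys the nameref'd nvme2_ns array by the namespace index, so the ng2n1 (character-device) and nvme2n1 (block-device) entries for namespace 1 land in the same slot, the later pass replacing the earlier one. A small sketch of that suffix extraction, with the discovery order taken from this log:

    #!/usr/bin/env bash
    # Sketch of the @58 registration: ${ns##*n} strips everything up to the
    # last 'n', leaving the namespace index used as the array key.
    declare -A nvme2_ns=()
    for ns_dev in ng2n1 ng2n2 ng2n3 nvme2n1; do   # order seen in the trace
        idx=${ns_dev##*n}                          # ng2n1 -> 1, nvme2n1 -> 1
        nvme2_ns[$idx]=$ns_dev                     # nvme2n1 replaces ng2n1
    done
    declare -p nvme2_ns   # e.g. nvme2_ns=([1]="nvme2n1" [2]="ng2n2" [3]="ng2n3")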
00:09:32.335 18:22:52 nvme_fdp -- nvme/functions.sh@16-23 -- # nvme2n1 (id-ns): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0
00:09:32.336 18:22:52
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.336 
18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.336 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
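
The eight lbaf0..lbaf7 entries cached just above are the namespace's supported LBA formats: ms is the per-block metadata size in bytes, lbads the log2 of the data block size (lbads:9 = 512 B, lbads:12 = 4096 B), and rp a relative-performance hint. flbas=0x4 selects format index 4, which is why lbaf4 (4 KiB blocks, no metadata) is the one tagged "(in use)". A hedged snippet for deriving the active block size from the arrays populated above:

    # Assumes the nvme2n1 array populated by the trace above.
    flbas_idx=$(( ${nvme2n1[flbas]} & 0xf ))   # low FLBAS bits select the format: 4
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${nvme2n1[lbaf$flbas_idx]}")
    echo $(( 1 << lbads ))                     # 4096 bytes per LBA
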
00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:32.337 18:22:52 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:32.337 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:32.338 18:22:52 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:32.338 18:22:52 
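
Namespace enumeration itself is the extglob loop replayed at functions.sh@54-58: every ngXnY character node or nvmeXnY block node under the controller's sysfs directory gets the same nvme_get treatment and is then recorded in _ctrl_ns, keyed by its namespace index (${ns##*n} strips everything through the last 'n'). A sketch of that loop shape, assuming it mirrors the trace and reusing the nvme_get sketch above:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    declare -A _ctrl_ns=()
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # functions.sh@54
        [[ -e $ns ]] || continue                                  # functions.sh@55
        ns_dev=${ns##*/}                                          # functions.sh@56: e.g. nvme2n3
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"                   # functions.sh@57
        _ctrl_ns[${ns##*n}]=$ns_dev                               # functions.sh@58: keys 1, 2, 3
    done
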
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:32.339 18:22:52 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.340 18:22:52 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "'
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "'
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "'
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:09:32.340 18:22:52 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:32.340 18:22:52 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:09:32.340 18:22:52 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:32.340 18:22:52 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:32.340 18:22:52 nvme_fdp --
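
With all three namespaces cached, the controller is registered in the script's global maps (functions.sh@60-63) and the outer @47 loop advances to /sys/class/nvme/nvme3 at 0000:00:13.0; pci_can_use returns 0 because the allow/block lists it tests are empty in this run, hence the bare `[[ =~ 0000:00:13.0 ]]` and `[[ -z '' ]]` in the scripts/common.sh trace. The registration step, sketched with the array names the trace shows:

    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()
    ctrl_dev=nvme2
    ctrls[$ctrl_dev]=nvme2                    # functions.sh@60
    nvmes[$ctrl_dev]=nvme2_ns                 # functions.sh@61: name of this ctrl's ns map
    bdfs[$ctrl_dev]=0000:00:12.0              # functions.sh@62: controller's PCI BDF
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme2    # functions.sh@63: positional slot 2
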
nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
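
The identity fields just cached mark this as a QEMU-emulated controller: vid 0x1b36 is the Red Hat/QEMU PCI vendor ID, ssvid 0x1af4 the Red Hat (virtio) subsystem vendor ID, and the model string is the stock "QEMU NVMe Ctrl" with firmware 8.0.0, i.e. the VM's emulated NVMe device rather than physical hardware.
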
00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 18:22:52 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 
18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.342 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:32.342 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:32.342 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.342 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:32.342 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:32.342 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:32.601 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
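A condensed sketch of the parsing idiom being traced above: nvme/functions.sh walks identify-controller output with IFS=: read -r reg val and eval-assigns each register/value pair into a per-controller associative array. The here-document below is a made-up two-field stand-in for the full id-ctrl dump, so treat it as illustrative rather than the script's exact code:

declare -A nvme3
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue              # skip lines with an empty value
    reg=${reg//[[:space:]]/}               # strip padding from the register name
    eval "nvme3[$reg]=\"${val# }\""        # e.g. nvme3[vid]=0x1b36, as in the trace
done <<'EOF'
vid   : 0x1b36
ssvid : 0x1af4
EOF
echo "${nvme3[vid]} ${nvme3[ssvid]}"       # -> 0x1b36 0x1af4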
00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.602 18:22:52 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:32.603 18:22:52 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 ))
00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3
00:09:32.603 18:22:52 nvme_fdp -- nvme/functions.sh@209 -- # return 0
00:09:32.603 18:22:52 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
00:09:32.603 18:22:52 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0
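The controller selection above reduces to one predicate: a controller has FDP when bit 19 of its CTRATT identify field is set. In this run nvme0, nvme1 and nvme2 report ctratt=0x8000 (bit 15 only) while nvme3 reports 0x88010 (bits 4, 15 and 19), so only nvme3 survives the filter. A minimal re-creation of ctrl_has_fdp's arithmetic, using the values from this trace as a hard-coded illustrative array:

declare -A ctratt=( [nvme0]=0x8000 [nvme1]=0x8000 [nvme2]=0x8000 [nvme3]=0x88010 )
for ctrl in nvme0 nvme1 nvme2 nvme3; do
    # bit 19 of CTRATT advertises Flexible Data Placement support
    if (( ${ctratt[$ctrl]} & 1 << 19 )); then
        echo "$ctrl"        # prints nvme3 only: 0x88010 & 0x80000 != 0
    fi
done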
00:09:32.603 18:22:52 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:32.862 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:33.428 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:33.429 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:33.429 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:33.429 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:33.429 18:22:53 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:09:33.429 18:22:53 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:33.429 18:22:53 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:33.429 18:22:53 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:09:33.686 ************************************
00:09:33.686 START TEST nvme_flexible_data_placement
00:09:33.686 ************************************
00:09:33.686 18:22:53 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:09:33.686 Initializing NVMe Controllers
00:09:33.686 Attaching to 0000:00:13.0
00:09:33.686 Controller supports FDP
00:09:33.686 Attached to 0000:00:13.0
00:09:33.686 Namespace ID: 1
00:09:33.686 Endurance Group ID: 1
00:09:33.686 Initialization complete.
00:09:33.686
00:09:33.686 ==================================
00:09:33.686 == FDP tests for Namespace: #01 ==
00:09:33.686 ==================================
00:09:33.686
00:09:33.686 Get Feature: FDP:
00:09:33.686 =================
00:09:33.686 Enabled: Yes
00:09:33.686 FDP configuration Index: 0
00:09:33.686
00:09:33.686 FDP configurations log page
00:09:33.686 ===========================
00:09:33.686 Number of FDP configurations: 1
00:09:33.686 Version: 0
00:09:33.686 Size: 112
00:09:33.686 FDP Configuration Descriptor: 0
00:09:33.686 Descriptor Size: 96
00:09:33.686 Reclaim Group Identifier format: 2
00:09:33.686 FDP Volatile Write Cache: Not Present
00:09:33.686 FDP Configuration: Valid
00:09:33.686 Vendor Specific Size: 0
00:09:33.686 Number of Reclaim Groups: 2
00:09:33.686 Number of Reclaim Unit Handles: 8
00:09:33.686 Max Placement Identifiers: 128
00:09:33.686 Number of Namespaces Supported: 256
00:09:33.686 Reclaim Unit Nominal Size: 6000000 bytes
00:09:33.686 Estimated Reclaim Unit Time Limit: Not Reported
00:09:33.686 RUH Desc #000: RUH Type: Initially Isolated
00:09:33.686 RUH Desc #001: RUH Type: Initially Isolated
00:09:33.686 RUH Desc #002: RUH Type: Initially Isolated
00:09:33.686 RUH Desc #003: RUH Type: Initially Isolated
00:09:33.686 RUH Desc #004: RUH Type: Initially Isolated
00:09:33.686 RUH Desc #005: RUH Type: Initially Isolated
00:09:33.686 RUH Desc #006: RUH Type: Initially Isolated
00:09:33.686 RUH Desc #007: RUH Type: Initially Isolated
00:09:33.686
00:09:33.686 FDP reclaim unit handle usage log page
00:09:33.686 ======================================
00:09:33.686 Number of Reclaim Unit Handles: 8
00:09:33.686 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:09:33.686 RUH Usage Desc #001: RUH Attributes: Unused
00:09:33.686 RUH Usage Desc #002: RUH Attributes: Unused
00:09:33.686 RUH Usage Desc #003: RUH Attributes: Unused
00:09:33.686 RUH Usage Desc #004: RUH Attributes: Unused
00:09:33.686 RUH Usage Desc #005: RUH Attributes: Unused
00:09:33.686 RUH Usage Desc #006: RUH Attributes: Unused
00:09:33.686 RUH Usage Desc #007: RUH Attributes: Unused
00:09:33.686
00:09:33.686 FDP statistics log page
00:09:33.686 =======================
00:09:33.686 Host bytes with metadata written: 1971494912
00:09:33.686 Media bytes with metadata written: 1972588544
00:09:33.686 Media bytes erased: 0
00:09:33.686
00:09:33.686 FDP Reclaim unit handle status
00:09:33.686 ==============================
00:09:33.686 Number of RUHS descriptors: 2
00:09:33.686 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000027d6
00:09:33.686 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:09:33.686
00:09:33.686 FDP write on placement id: 0 success
00:09:33.686
00:09:33.686 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:09:33.686
00:09:33.686 IO mgmt send: RUH update for Placement ID: #0 Success
00:09:33.686
00:09:33.686 Get Feature: FDP Events for Placement handle: #0
00:09:33.686 ========================
00:09:33.686 Number of FDP Events: 6
00:09:33.686 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:09:33.686 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:09:33.686 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:09:33.686 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:09:33.686 FDP Event: #4 Type: Media Reallocated Enabled: No
00:09:33.686 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:09:33.686
00:09:33.686 FDP events log page
00:09:33.686 ===================
00:09:33.686 Number of FDP events: 1
00:09:33.686 FDP Event #0:
00:09:33.686 Event Type: RU Not Written to Capacity
00:09:33.686 Placement Identifier: Valid
00:09:33.686 NSID: Valid
00:09:33.686 Location: Valid
00:09:33.686 Placement Identifier: 0
00:09:33.686 Event Timestamp: 4
00:09:33.686 Namespace Identifier: 1
00:09:33.686 Reclaim Group Identifier: 0
00:09:33.686 Reclaim Unit Handle Identifier: 0
00:09:33.686
00:09:33.686 FDP test passed
00:09:33.686
00:09:33.686 real 0m0.224s
00:09:33.686 user 0m0.067s
00:09:33.686 sys 0m0.056s
18:22:53 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:33.686 ************************************
00:09:33.686 END TEST nvme_flexible_data_placement
18:22:53 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:09:33.686 ************************************
00:09:33.944
00:09:33.944 real 0m7.740s
00:09:33.944 user 0m1.079s
00:09:33.944 sys 0m1.433s
18:22:53 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:33.944 18:22:53 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:09:33.944 ************************************
00:09:33.944 END TEST nvme_fdp
00:09:33.944 ************************************
00:09:33.944 18:22:53 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:09:33.944 18:22:53 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:33.944 18:22:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:33.944 18:22:53 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:33.944 18:22:53 -- common/autotest_common.sh@10 -- # set +x
00:09:33.944 ************************************
00:09:33.944 START TEST nvme_rpc
00:09:33.944 ************************************
00:09:33.944 18:22:53 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:33.944 * Looking for test storage...
00:09:33.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:33.944 18:22:53 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:33.944 18:22:53 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:33.944 18:22:53 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:33.944 18:22:53 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:33.944 18:22:53 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.944 18:22:53 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.944 18:22:53 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.944 18:22:53 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.944 18:22:53 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.944 18:22:53 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.944 18:22:53 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.944 18:22:53 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.945 18:22:53 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:33.945 18:22:53 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.945 18:22:53 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:33.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.945 --rc genhtml_branch_coverage=1 00:09:33.945 --rc genhtml_function_coverage=1 00:09:33.945 --rc genhtml_legend=1 00:09:33.945 --rc geninfo_all_blocks=1 00:09:33.945 --rc geninfo_unexecuted_blocks=1 00:09:33.945 00:09:33.945 ' 00:09:33.945 18:22:53 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:33.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.945 --rc genhtml_branch_coverage=1 00:09:33.945 --rc genhtml_function_coverage=1 00:09:33.945 --rc genhtml_legend=1 00:09:33.945 --rc geninfo_all_blocks=1 00:09:33.945 --rc geninfo_unexecuted_blocks=1 00:09:33.945 00:09:33.945 ' 00:09:33.945 18:22:53 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:33.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.945 --rc genhtml_branch_coverage=1 00:09:33.945 --rc genhtml_function_coverage=1 00:09:33.945 --rc genhtml_legend=1 00:09:33.945 --rc geninfo_all_blocks=1 00:09:33.945 --rc geninfo_unexecuted_blocks=1 00:09:33.945 00:09:33.945 ' 00:09:33.945 18:22:53 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:33.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.945 --rc genhtml_branch_coverage=1 00:09:33.945 --rc genhtml_function_coverage=1 00:09:33.945 --rc genhtml_legend=1 00:09:33.945 --rc geninfo_all_blocks=1 00:09:33.945 --rc geninfo_unexecuted_blocks=1 00:09:33.945 00:09:33.945 ' 00:09:33.945 18:22:53 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:33.945 18:22:53 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:33.945 18:22:53 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:33.945 18:22:53 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:33.945 18:22:53 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:33.945 18:22:53 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:33.945 18:22:53 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:33.945 18:22:53 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:33.945 18:22:53 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:33.945 18:22:53 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:33.945 18:22:53 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:34.203 18:22:53 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:34.203 18:22:53 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:34.203 18:22:53 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:34.203 18:22:53 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:34.203 18:22:53 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77628 00:09:34.203 18:22:53 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:34.203 18:22:53 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77628 00:09:34.203 18:22:53 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 77628 ']' 00:09:34.203 18:22:53 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.203 18:22:53 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.203 18:22:53 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.203 18:22:53 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.203 18:22:53 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.203 18:22:53 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:34.203 [2024-11-29 18:22:53.962491] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
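Once spdk_tgt is listening on /var/tmp/spdk.sock, nvme_rpc.sh drives it purely over JSON-RPC, and the failing bdev_nvme_apply_firmware call that follows is the point of the test. A condensed sketch of that round trip (the method names and arguments are the ones in this trace; the error handling is paraphrased):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0    # exposes bdev Nvme0n1
if ! $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
    echo 'expected failure: "open file failed." (code -32603)'
fi
$rpc bdev_nvme_detach_controller Nvme0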
00:09:34.203 [2024-11-29 18:22:53.962611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77628 ] 00:09:34.461 [2024-11-29 18:22:54.119504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:34.461 [2024-11-29 18:22:54.139870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.461 [2024-11-29 18:22:54.139929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.028 18:22:54 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.028 18:22:54 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:35.028 18:22:54 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:35.286 Nvme0n1 00:09:35.286 18:22:55 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:35.286 18:22:55 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:35.544 request: 00:09:35.544 { 00:09:35.544 "bdev_name": "Nvme0n1", 00:09:35.544 "filename": "non_existing_file", 00:09:35.544 "method": "bdev_nvme_apply_firmware", 00:09:35.544 "req_id": 1 00:09:35.544 } 00:09:35.544 Got JSON-RPC error response 00:09:35.544 response: 00:09:35.544 { 00:09:35.544 "code": -32603, 00:09:35.544 "message": "open file failed." 00:09:35.544 } 00:09:35.544 18:22:55 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:35.544 18:22:55 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:35.544 18:22:55 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:35.544 18:22:55 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:35.544 18:22:55 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77628 00:09:35.544 18:22:55 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 77628 ']' 00:09:35.544 18:22:55 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 77628 00:09:35.544 18:22:55 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:35.803 18:22:55 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.803 18:22:55 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77628 00:09:35.803 18:22:55 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.803 18:22:55 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.803 killing process with pid 77628 00:09:35.803 18:22:55 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77628' 00:09:35.803 18:22:55 nvme_rpc -- common/autotest_common.sh@973 -- # kill 77628 00:09:35.803 18:22:55 nvme_rpc -- common/autotest_common.sh@978 -- # wait 77628 00:09:36.064 00:09:36.064 real 0m2.057s 00:09:36.064 user 0m4.001s 00:09:36.064 sys 0m0.472s 00:09:36.064 18:22:55 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.064 ************************************ 00:09:36.064 END TEST nvme_rpc 00:09:36.064 18:22:55 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.064 ************************************ 00:09:36.064 18:22:55 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:36.064 18:22:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:36.064 18:22:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.064 18:22:55 -- common/autotest_common.sh@10 -- # set +x 00:09:36.064 ************************************ 00:09:36.064 START TEST nvme_rpc_timeouts 00:09:36.064 ************************************ 00:09:36.064 18:22:55 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:36.064 * Looking for test storage... 00:09:36.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:36.064 18:22:55 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.064 18:22:55 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.064 18:22:55 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.064 18:22:55 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.064 18:22:55 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:36.064 18:22:55 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.064 18:22:55 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.064 --rc genhtml_branch_coverage=1 00:09:36.064 --rc genhtml_function_coverage=1 00:09:36.064 --rc genhtml_legend=1 00:09:36.064 --rc geninfo_all_blocks=1 00:09:36.064 --rc geninfo_unexecuted_blocks=1 00:09:36.064 00:09:36.064 ' 00:09:36.064 18:22:55 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.064 --rc genhtml_branch_coverage=1 00:09:36.064 --rc genhtml_function_coverage=1 00:09:36.064 --rc genhtml_legend=1 00:09:36.064 --rc geninfo_all_blocks=1 00:09:36.064 --rc geninfo_unexecuted_blocks=1 00:09:36.064 00:09:36.064 ' 00:09:36.064 18:22:55 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.064 --rc genhtml_branch_coverage=1 00:09:36.064 --rc genhtml_function_coverage=1 00:09:36.064 --rc genhtml_legend=1 00:09:36.064 --rc geninfo_all_blocks=1 00:09:36.064 --rc geninfo_unexecuted_blocks=1 00:09:36.064 00:09:36.064 ' 00:09:36.064 18:22:55 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.064 --rc genhtml_branch_coverage=1 00:09:36.064 --rc genhtml_function_coverage=1 00:09:36.064 --rc genhtml_legend=1 00:09:36.064 --rc geninfo_all_blocks=1 00:09:36.064 --rc geninfo_unexecuted_blocks=1 00:09:36.064 00:09:36.064 ' 00:09:36.064 18:22:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:36.064 18:22:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77682 00:09:36.064 18:22:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77682 00:09:36.064 18:22:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77714 00:09:36.065 18:22:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
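With the tmpfiles and trap in place, nvme_rpc_timeouts.sh waits for the target and then snapshots its configuration before and after changing the timeouts. A plausible reconstruction of that sequence (the save_config redirections are inferred from the tmpfile names above; the set_options flags are verbatim from the trace that follows):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default_77682     # baseline: action=none, timeouts=0
$rpc bdev_nvme_set_options --timeout-us=12000000 \
    --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified_77682    # snapshot after the change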
00:09:36.065 18:22:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77714 00:09:36.065 18:22:55 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 77714 ']' 00:09:36.065 18:22:55 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.065 18:22:55 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.065 18:22:55 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.065 18:22:55 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.065 18:22:55 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:36.065 18:22:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:36.326 [2024-11-29 18:22:56.025760] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:09:36.326 [2024-11-29 18:22:56.025873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77714 ] 00:09:36.326 [2024-11-29 18:22:56.182996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:36.326 [2024-11-29 18:22:56.213283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.326 [2024-11-29 18:22:56.213402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.304 18:22:56 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.304 Checking default timeout settings: 00:09:37.304 18:22:56 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:37.304 18:22:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:37.304 18:22:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:37.304 Making settings changes with rpc: 00:09:37.304 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:37.564 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:37.564 Check default vs. modified settings: 00:09:37.564 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:37.564 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77682 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77682 00:09:38.136 Setting action_on_timeout is changed as expected. 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77682 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77682 00:09:38.136 Setting timeout_us is changed as expected. 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
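Each field is pulled out of the saved JSON config with the grep/awk/sed pipeline visible above; the sed pass strips everything that is not alphanumeric, so the JSON decoration ("abort",) and the bare RPC argument (abort) compare equal. For one setting:

  #   "action_on_timeout": "abort",   ->   abort
  setting_modified=$(grep action_on_timeout /tmp/settings_modified_77682 \
                       | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  [ none == "$setting_modified" ] || echo 'Setting action_on_timeout is changed as expected.'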
00:09:38.136 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:38.137 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77682 00:09:38.137 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:38.137 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:38.137 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:38.137 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77682 00:09:38.137 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:38.137 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:38.137 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:38.137 Setting timeout_admin_us is changed as expected. 00:09:38.137 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:38.137 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:38.137 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:38.137 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77682 /tmp/settings_modified_77682 00:09:38.137 18:22:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77714 00:09:38.137 18:22:57 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 77714 ']' 00:09:38.137 18:22:57 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 77714 00:09:38.137 18:22:57 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:38.137 18:22:57 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.137 18:22:57 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77714 00:09:38.137 18:22:57 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.137 killing process with pid 77714 00:09:38.137 18:22:57 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.137 18:22:57 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77714' 00:09:38.137 18:22:57 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 77714 00:09:38.137 18:22:57 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 77714 00:09:38.398 RPC TIMEOUT SETTING TEST PASSED. 00:09:38.398 18:22:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
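Stripped of the xtrace plumbing, the whole check reduces to: snapshot the defaults over RPC, push new timeouts, snapshot again, and compare the three fields. A minimal replay of that flow, with paths and values exactly as traced and the spdk_tgt process management omitted:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/settings_default_77682
  $rpc bdev_nvme_set_options --timeout-us=12000000 \
       --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc save_config > /tmp/settings_modified_77682
  for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default_77682 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_77682 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
  done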
00:09:38.398 00:09:38.398 real 0m2.340s 00:09:38.398 user 0m4.670s 00:09:38.398 sys 0m0.554s 00:09:38.398 ************************************ 00:09:38.398 END TEST nvme_rpc_timeouts 00:09:38.398 ************************************ 00:09:38.398 18:22:58 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.398 18:22:58 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:38.398 18:22:58 -- spdk/autotest.sh@239 -- # uname -s 00:09:38.398 18:22:58 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:38.398 18:22:58 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:38.398 18:22:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.398 18:22:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.398 18:22:58 -- common/autotest_common.sh@10 -- # set +x 00:09:38.398 ************************************ 00:09:38.398 START TEST sw_hotplug 00:09:38.398 ************************************ 00:09:38.398 18:22:58 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:38.398 * Looking for test storage... 00:09:38.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:38.398 18:22:58 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:38.398 18:22:58 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:38.660 18:22:58 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:38.660 18:22:58 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.660 18:22:58 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:38.660 18:22:58 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.660 18:22:58 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:38.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.660 --rc genhtml_branch_coverage=1 00:09:38.660 --rc genhtml_function_coverage=1 00:09:38.660 --rc genhtml_legend=1 00:09:38.660 --rc geninfo_all_blocks=1 00:09:38.660 --rc geninfo_unexecuted_blocks=1 00:09:38.660 00:09:38.660 ' 00:09:38.660 18:22:58 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:38.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.660 --rc genhtml_branch_coverage=1 00:09:38.660 --rc genhtml_function_coverage=1 00:09:38.660 --rc genhtml_legend=1 00:09:38.660 --rc geninfo_all_blocks=1 00:09:38.660 --rc geninfo_unexecuted_blocks=1 00:09:38.660 00:09:38.660 ' 00:09:38.660 18:22:58 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:38.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.660 --rc genhtml_branch_coverage=1 00:09:38.660 --rc genhtml_function_coverage=1 00:09:38.660 --rc genhtml_legend=1 00:09:38.660 --rc geninfo_all_blocks=1 00:09:38.660 --rc geninfo_unexecuted_blocks=1 00:09:38.660 00:09:38.660 ' 00:09:38.660 18:22:58 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:38.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.660 --rc genhtml_branch_coverage=1 00:09:38.660 --rc genhtml_function_coverage=1 00:09:38.660 --rc genhtml_legend=1 00:09:38.660 --rc geninfo_all_blocks=1 00:09:38.660 --rc geninfo_unexecuted_blocks=1 00:09:38.660 00:09:38.660 ' 00:09:38.660 18:22:58 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:38.922 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:38.922 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:38.922 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:38.922 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:38.922 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:39.184 18:22:58 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:39.184 18:22:58 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:39.184 18:22:58 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
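The nvme_in_userspace expansion over the following lines builds that array by PCI class code: printf %02x turns class 1, subclass 8, prog-if 2 into 01/08/02, i.e. NVMe controllers, and the lspci listing is filtered down to matching BDFs. The pipeline, condensed from the trace:

  # class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVMe)
  lspci -mm -n -D | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # -> 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0

All four controllers qualify here; nvme_count=2 then trims the array to the first two before the hotplug loop runs.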
00:09:39.184 18:22:58 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:39.184 18:22:58 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:39.184 18:22:58 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:39.185 18:22:58 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:39.185 18:22:58 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:39.185 18:22:58 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:39.446 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:39.706 Waiting for block devices as requested 00:09:39.706 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.706 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.706 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.968 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.260 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:45.260 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:45.260 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:45.519 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:45.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:45.519 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:45.779 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:46.038 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:46.038 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:46.298 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:46.298 18:23:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:46.298 18:23:06 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:46.298 18:23:06 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:46.298 18:23:06 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78562 00:09:46.298 18:23:06 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:46.298 18:23:06 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:46.298 18:23:06 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:46.298 18:23:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:46.298 18:23:06 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:46.298 18:23:06 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:46.298 18:23:06 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:46.298 18:23:06 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:46.298 18:23:06 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:46.298 18:23:06 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:46.298 18:23:06 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:46.298 18:23:06 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:46.298 18:23:06 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:46.298 18:23:06 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:46.559 Initializing NVMe Controllers 00:09:46.559 Attaching to 0000:00:10.0 00:09:46.559 Attaching to 0000:00:11.0 00:09:46.559 Attached to 0000:00:10.0 00:09:46.559 Attached to 0000:00:11.0 00:09:46.559 Initialization complete. Starting I/O... 
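The removal/re-attach cycles that follow are driven through sysfs while the hotplug example keeps I/O running. The xtrace only shows the echo half of each redirection, so the target paths below are the stock Linux sysfs PCI interface rather than a literal quote of sw_hotplug.sh (only /sys/bus/pci/rescan is confirmed later in this log, in the tgt_run_hotplug trap):

  # Hedged sketch: sysfs paths are assumed, values are from the trace.
  for bdf in 0000:00:10.0 0000:00:11.0; do
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"        # surprise-remove under I/O
  done
  echo 1 > /sys/bus/pci/rescan                         # rediscover the functions
  for bdf in 0000:00:10.0 0000:00:11.0; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe           # rebind to the userspace driver
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
  done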
00:09:46.559 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:46.559 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:46.559 00:09:47.525 QEMU NVMe Ctrl (12340 ): 2496 I/Os completed (+2496) 00:09:47.525 QEMU NVMe Ctrl (12341 ): 2496 I/Os completed (+2496) 00:09:47.525 00:09:48.457 QEMU NVMe Ctrl (12340 ): 6120 I/Os completed (+3624) 00:09:48.457 QEMU NVMe Ctrl (12341 ): 6116 I/Os completed (+3620) 00:09:48.457 00:09:49.488 QEMU NVMe Ctrl (12340 ): 9844 I/Os completed (+3724) 00:09:49.488 QEMU NVMe Ctrl (12341 ): 9840 I/Os completed (+3724) 00:09:49.488 00:09:50.424 QEMU NVMe Ctrl (12340 ): 13500 I/Os completed (+3656) 00:09:50.424 QEMU NVMe Ctrl (12341 ): 13499 I/Os completed (+3659) 00:09:50.424 00:09:51.355 QEMU NVMe Ctrl (12340 ): 17208 I/Os completed (+3708) 00:09:51.355 QEMU NVMe Ctrl (12341 ): 17207 I/Os completed (+3708) 00:09:51.355 00:09:52.298 18:23:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:52.298 18:23:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:52.298 18:23:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:52.298 [2024-11-29 18:23:12.061064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:52.298 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:52.298 [2024-11-29 18:23:12.061886] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.298 [2024-11-29 18:23:12.061924] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.298 [2024-11-29 18:23:12.061937] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.298 [2024-11-29 18:23:12.061950] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.298 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:52.298 [2024-11-29 18:23:12.063031] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.298 [2024-11-29 18:23:12.063060] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.298 [2024-11-29 18:23:12.063070] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.298 [2024-11-29 18:23:12.063081] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.298 18:23:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:52.298 18:23:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:52.298 [2024-11-29 18:23:12.082339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:52.298 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:52.298 [2024-11-29 18:23:12.083067] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.298 [2024-11-29 18:23:12.083092] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.298 [2024-11-29 18:23:12.083104] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.298 [2024-11-29 18:23:12.083114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.298 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:52.298 [2024-11-29 18:23:12.083906] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.298 [2024-11-29 18:23:12.083923] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.298 [2024-11-29 18:23:12.083935] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.298 [2024-11-29 18:23:12.083944] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.298 18:23:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:52.298 18:23:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:52.298 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:52.298 EAL: Scan for (pci) bus failed. 00:09:52.298 18:23:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:52.298 18:23:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:52.298 18:23:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:52.560 18:23:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:52.560 18:23:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:52.560 18:23:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:52.560 18:23:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:52.560 18:23:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:52.560 Attaching to 0000:00:10.0 00:09:52.560 Attached to 0000:00:10.0 00:09:52.560 QEMU NVMe Ctrl (12340 ): 92 I/Os completed (+92) 00:09:52.560 00:09:52.560 18:23:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:52.560 18:23:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:52.560 18:23:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:52.560 Attaching to 0000:00:11.0 00:09:52.560 Attached to 0000:00:11.0 00:09:53.502 QEMU NVMe Ctrl (12340 ): 4406 I/Os completed (+4314) 00:09:53.502 QEMU NVMe Ctrl (12341 ): 4032 I/Os completed (+4032) 00:09:53.502 00:09:54.445 QEMU NVMe Ctrl (12340 ): 9066 I/Os completed (+4660) 00:09:54.445 QEMU NVMe Ctrl (12341 ): 8682 I/Os completed (+4650) 00:09:54.445 00:09:55.390 QEMU NVMe Ctrl (12340 ): 12610 I/Os completed (+3544) 00:09:55.390 QEMU NVMe Ctrl (12341 ): 12330 I/Os completed (+3648) 00:09:55.390 00:09:56.774 QEMU NVMe Ctrl (12340 ): 15786 I/Os completed (+3176) 00:09:56.774 QEMU NVMe Ctrl (12341 ): 15506 I/Os completed (+3176) 00:09:56.774 00:09:57.719 QEMU NVMe Ctrl (12340 ): 19776 I/Os completed (+3990) 00:09:57.719 QEMU NVMe Ctrl (12341 ): 19567 I/Os completed (+4061) 00:09:57.719 00:09:58.661 QEMU NVMe Ctrl (12340 ): 23004 I/Os completed (+3228) 00:09:58.661 QEMU NVMe Ctrl (12341 ): 22811 I/Os completed (+3244) 00:09:58.661 00:09:59.603 QEMU NVMe Ctrl (12340 ): 26002 I/Os completed (+2998) 
00:09:59.603 QEMU NVMe Ctrl (12341 ): 25826 I/Os completed (+3015) 00:09:59.603 00:10:00.543 QEMU NVMe Ctrl (12340 ): 29042 I/Os completed (+3040) 00:10:00.543 QEMU NVMe Ctrl (12341 ): 28872 I/Os completed (+3046) 00:10:00.543 00:10:01.485 QEMU NVMe Ctrl (12340 ): 32058 I/Os completed (+3016) 00:10:01.485 QEMU NVMe Ctrl (12341 ): 31887 I/Os completed (+3015) 00:10:01.485 00:10:02.430 QEMU NVMe Ctrl (12340 ): 35066 I/Os completed (+3008) 00:10:02.430 QEMU NVMe Ctrl (12341 ): 34907 I/Os completed (+3020) 00:10:02.430 00:10:03.375 QEMU NVMe Ctrl (12340 ): 38098 I/Os completed (+3032) 00:10:03.375 QEMU NVMe Ctrl (12341 ): 37943 I/Os completed (+3036) 00:10:03.375 00:10:04.763 QEMU NVMe Ctrl (12340 ): 41253 I/Os completed (+3155) 00:10:04.763 QEMU NVMe Ctrl (12341 ): 41104 I/Os completed (+3161) 00:10:04.763 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:04.763 [2024-11-29 18:23:24.321032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:04.763 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:04.763 [2024-11-29 18:23:24.321842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.763 [2024-11-29 18:23:24.321879] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.763 [2024-11-29 18:23:24.321893] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.763 [2024-11-29 18:23:24.321907] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.763 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:04.763 [2024-11-29 18:23:24.323009] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.763 [2024-11-29 18:23:24.323041] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.763 [2024-11-29 18:23:24.323052] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.763 [2024-11-29 18:23:24.323065] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:04.763 [2024-11-29 18:23:24.340886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:04.763 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:04.763 [2024-11-29 18:23:24.341606] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.763 [2024-11-29 18:23:24.341636] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.763 [2024-11-29 18:23:24.341650] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.763 [2024-11-29 18:23:24.341664] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.763 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:04.763 [2024-11-29 18:23:24.342450] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.763 [2024-11-29 18:23:24.342488] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.763 [2024-11-29 18:23:24.342501] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.763 [2024-11-29 18:23:24.342510] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.763 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:04.763 EAL: Scan for (pci) bus failed. 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:04.763 Attaching to 0000:00:10.0 00:10:04.763 Attached to 0000:00:10.0 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:04.763 18:23:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:04.763 Attaching to 0000:00:11.0 00:10:04.763 Attached to 0000:00:11.0 00:10:05.713 QEMU NVMe Ctrl (12340 ): 3234 I/Os completed (+3234) 00:10:05.713 QEMU NVMe Ctrl (12341 ): 2857 I/Os completed (+2857) 00:10:05.713 00:10:06.660 QEMU NVMe Ctrl (12340 ): 7544 I/Os completed (+4310) 00:10:06.660 QEMU NVMe Ctrl (12341 ): 7178 I/Os completed (+4321) 00:10:06.660 00:10:07.594 QEMU NVMe Ctrl (12340 ): 11802 I/Os completed (+4258) 00:10:07.594 QEMU NVMe Ctrl (12341 ): 11442 I/Os completed (+4264) 00:10:07.594 00:10:08.529 QEMU NVMe Ctrl (12340 ): 15597 I/Os completed (+3795) 00:10:08.529 QEMU NVMe Ctrl (12341 ): 15288 I/Os completed (+3846) 00:10:08.529 00:10:09.467 QEMU NVMe Ctrl (12340 ): 19289 I/Os completed (+3692) 00:10:09.467 QEMU NVMe Ctrl (12341 ): 18987 I/Os completed (+3699) 00:10:09.467 00:10:10.407 QEMU NVMe Ctrl (12340 ): 23638 I/Os completed (+4349) 00:10:10.407 QEMU NVMe Ctrl (12341 ): 23363 I/Os completed (+4376) 00:10:10.407 00:10:11.790 QEMU NVMe Ctrl (12340 ): 27865 I/Os completed (+4227) 00:10:11.790 QEMU NVMe Ctrl (12341 ): 27589 I/Os completed (+4226) 00:10:11.790 
00:10:12.361 QEMU NVMe Ctrl (12340 ): 32115 I/Os completed (+4250) 00:10:12.361 QEMU NVMe Ctrl (12341 ): 31824 I/Os completed (+4235) 00:10:12.361 00:10:13.746 QEMU NVMe Ctrl (12340 ): 36363 I/Os completed (+4248) 00:10:13.746 QEMU NVMe Ctrl (12341 ): 36055 I/Os completed (+4231) 00:10:13.746 00:10:14.688 QEMU NVMe Ctrl (12340 ): 40597 I/Os completed (+4234) 00:10:14.688 QEMU NVMe Ctrl (12341 ): 40273 I/Os completed (+4218) 00:10:14.688 00:10:15.632 QEMU NVMe Ctrl (12340 ): 44499 I/Os completed (+3902) 00:10:15.632 QEMU NVMe Ctrl (12341 ): 44131 I/Os completed (+3858) 00:10:15.632 00:10:16.576 QEMU NVMe Ctrl (12340 ): 47511 I/Os completed (+3012) 00:10:16.576 QEMU NVMe Ctrl (12341 ): 47152 I/Os completed (+3021) 00:10:16.576 00:10:16.837 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:16.837 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:16.837 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:16.837 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:16.837 [2024-11-29 18:23:36.590662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:16.837 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:16.837 [2024-11-29 18:23:36.591855] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.837 [2024-11-29 18:23:36.591913] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.837 [2024-11-29 18:23:36.591929] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.837 [2024-11-29 18:23:36.591950] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.837 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:16.837 [2024-11-29 18:23:36.593501] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.837 [2024-11-29 18:23:36.593556] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.837 [2024-11-29 18:23:36.593571] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.837 [2024-11-29 18:23:36.593587] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.837 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:16.837 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:16.837 [2024-11-29 18:23:36.612033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:16.837 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:16.837 [2024-11-29 18:23:36.613114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.837 [2024-11-29 18:23:36.613159] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.837 [2024-11-29 18:23:36.613176] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.837 [2024-11-29 18:23:36.613189] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.837 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:16.837 [2024-11-29 18:23:36.614411] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.837 [2024-11-29 18:23:36.614491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.837 [2024-11-29 18:23:36.614510] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.837 [2024-11-29 18:23:36.614525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.837 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:16.837 EAL: Scan for (pci) bus failed. 00:10:16.837 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:16.837 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:16.837 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:16.837 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:16.837 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:17.098 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:17.098 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:17.098 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:17.099 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:17.099 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:17.099 Attaching to 0000:00:10.0 00:10:17.099 Attached to 0000:00:10.0 00:10:17.099 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:17.099 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:17.099 18:23:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:17.099 Attaching to 0000:00:11.0 00:10:17.099 Attached to 0000:00:11.0 00:10:17.099 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:17.099 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:17.099 [2024-11-29 18:23:36.901489] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:29.334 18:23:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:29.334 18:23:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:29.334 18:23:48 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.83 00:10:29.334 18:23:48 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.83 00:10:29.334 18:23:48 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:29.334 18:23:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.83 00:10:29.334 18:23:48 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.83 2 00:10:29.334 remove_attach_helper took 42.83s to complete (handling 2 nvme drive(s)) 18:23:48 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:35.918 18:23:54 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78562 00:10:35.919 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78562) - No such process 00:10:35.919 18:23:54 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78562 00:10:35.919 18:23:54 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:35.919 18:23:54 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:35.919 18:23:54 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:35.919 18:23:54 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79112 00:10:35.919 18:23:54 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:35.919 18:23:54 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79112 00:10:35.919 18:23:54 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:35.919 18:23:54 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 79112 ']' 00:10:35.919 18:23:54 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.919 18:23:54 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:35.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.919 18:23:54 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.919 18:23:54 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:35.919 18:23:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:35.919 [2024-11-29 18:23:54.989235] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:10:35.919 [2024-11-29 18:23:54.989391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79112 ] 00:10:35.919 [2024-11-29 18:23:55.145059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.919 [2024-11-29 18:23:55.174616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.181 18:23:55 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.181 18:23:55 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:36.181 18:23:55 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:36.181 18:23:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.181 18:23:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:36.181 18:23:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.181 18:23:55 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:36.181 18:23:55 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:36.181 18:23:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:36.181 18:23:55 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:36.181 18:23:55 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:36.181 18:23:55 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:36.181 18:23:55 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:36.181 18:23:55 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:36.181 18:23:55 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:36.181 18:23:55 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:36.181 18:23:55 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:36.181 18:23:55 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:36.182 18:23:55 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:42.764 18:24:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:42.764 18:24:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:42.764 18:24:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:42.764 18:24:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:42.764 18:24:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:42.764 18:24:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:42.764 18:24:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:42.764 18:24:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:42.764 18:24:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:42.764 18:24:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:42.764 18:24:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.764 18:24:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:42.764 18:24:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:42.764 18:24:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.764 18:24:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:42.764 18:24:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:42.764 [2024-11-29 18:24:01.928285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
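From here the harness repeats the same three hotplug events against a running spdk_tgt with use_bdev=true: instead of sleeping a fixed interval, it decides that a removal or attach has completed by polling the target's bdev list over RPC and extracting each bdev's PCI address. A sketch of that wait loop, with the jq filter verbatim from the trace and the harness's rpc_cmd wrapper spelled out as a direct rpc.py call:

  bdev_bdfs() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
  }
  # After a surprise removal: wait until no bdev still claims a BDF.
  while bdfs=$(bdev_bdfs); [ -n "$bdfs" ]; do
    printf 'Still waiting for %s to be gone\n' $bdfs   # unquoted: one line per BDF
    sleep 0.5
  done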
[0000:00:10.0, 0] in failed state. 00:10:42.764 [2024-11-29 18:24:01.929351] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.764 [2024-11-29 18:24:01.929387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.764 [2024-11-29 18:24:01.929399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.764 [2024-11-29 18:24:01.929411] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.764 [2024-11-29 18:24:01.929420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.764 [2024-11-29 18:24:01.929427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.764 [2024-11-29 18:24:01.929437] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.764 [2024-11-29 18:24:01.929443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.764 [2024-11-29 18:24:01.929451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.764 [2024-11-29 18:24:01.929467] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.764 [2024-11-29 18:24:01.929475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.764 [2024-11-29 18:24:01.929487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.764 [2024-11-29 18:24:02.328280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:42.764 [2024-11-29 18:24:02.329302] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.764 [2024-11-29 18:24:02.329335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.764 [2024-11-29 18:24:02.329344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.764 [2024-11-29 18:24:02.329354] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.764 [2024-11-29 18:24:02.329361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.764 [2024-11-29 18:24:02.329369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.765 [2024-11-29 18:24:02.329376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.765 [2024-11-29 18:24:02.329383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.765 [2024-11-29 18:24:02.329390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.765 [2024-11-29 18:24:02.329399] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.765 [2024-11-29 18:24:02.329405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.765 [2024-11-29 18:24:02.329413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.765 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:42.765 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:42.765 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:42.765 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:42.765 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:42.765 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:42.765 18:24:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.765 18:24:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:42.765 18:24:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.765 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:42.765 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:42.765 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:42.765 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:42.765 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:42.765 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:42.765 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:42.765 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:42.765 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:42.765 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:43.025 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:43.026 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:43.026 18:24:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:55.250 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:55.250 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:55.250 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:55.250 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:55.250 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:55.250 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:55.250 18:24:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.250 18:24:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:55.250 18:24:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.251 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:55.251 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:55.251 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:55.251 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:55.251 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:55.251 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:55.251 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:55.251 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:55.251 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:55.251 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:55.251 18:24:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.251 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:55.251 18:24:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:55.251 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:55.251 18:24:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.251 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:55.251 18:24:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:55.251 [2024-11-29 18:24:14.828469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:55.251 [2024-11-29 18:24:14.829512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.251 [2024-11-29 18:24:14.829543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.251 [2024-11-29 18:24:14.829555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.251 [2024-11-29 18:24:14.829566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.251 [2024-11-29 18:24:14.829575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.251 [2024-11-29 18:24:14.829582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.251 [2024-11-29 18:24:14.829590] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.251 [2024-11-29 18:24:14.829596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.251 [2024-11-29 18:24:14.829604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.251 [2024-11-29 18:24:14.829610] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.251 [2024-11-29 18:24:14.829618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.251 [2024-11-29 18:24:14.829624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.511 [2024-11-29 18:24:15.228469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:55.511 [2024-11-29 18:24:15.229450] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.511 [2024-11-29 18:24:15.229493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.511 [2024-11-29 18:24:15.229503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.511 [2024-11-29 18:24:15.229514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.511 [2024-11-29 18:24:15.229521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.511 [2024-11-29 18:24:15.229529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.511 [2024-11-29 18:24:15.229535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.511 [2024-11-29 18:24:15.229544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.511 [2024-11-29 18:24:15.229550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.511 [2024-11-29 18:24:15.229557] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.511 [2024-11-29 18:24:15.229563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.511 [2024-11-29 18:24:15.229572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.511 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:55.511 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:55.511 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:55.511 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:55.511 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:55.511 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:55.511 18:24:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.511 18:24:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:55.511 18:24:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.511 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:55.511 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:55.511 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:55.511 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:55.511 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:55.771 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:55.771 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:55.771 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:55.771 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:55.771 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:55.771 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:55.771 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:55.771 18:24:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:08.095 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:08.095 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:08.095 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:08.095 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:08.095 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:08.095 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:08.095 18:24:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.095 18:24:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:08.095 18:24:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.095 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:08.095 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:08.095 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:08.095 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:08.095 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:08.095 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:08.095 [2024-11-29 18:24:27.628681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:08.095 [2024-11-29 18:24:27.629757] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.095 [2024-11-29 18:24:27.629790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.095 [2024-11-29 18:24:27.629804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.095 [2024-11-29 18:24:27.629816] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.095 [2024-11-29 18:24:27.629825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.095 [2024-11-29 18:24:27.629832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.095 [2024-11-29 18:24:27.629839] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.095 [2024-11-29 18:24:27.629846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.095 [2024-11-29 18:24:27.629853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.095 [2024-11-29 18:24:27.629859] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.095 [2024-11-29 18:24:27.629867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.095 [2024-11-29 18:24:27.629873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.095 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:08.096 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:08.096 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:08.096 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:08.096 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:08.096 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:08.096 18:24:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.096 18:24:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:08.096 18:24:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.096 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:08.096 18:24:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:08.356 [2024-11-29 18:24:28.028687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:08.356 [2024-11-29 18:24:28.029703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.356 [2024-11-29 18:24:28.029734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.356 [2024-11-29 18:24:28.029744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.356 [2024-11-29 18:24:28.029756] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.356 [2024-11-29 18:24:28.029764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.356 [2024-11-29 18:24:28.029774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.356 [2024-11-29 18:24:28.029781] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.356 [2024-11-29 18:24:28.029788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.356 [2024-11-29 18:24:28.029795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.356 [2024-11-29 18:24:28.029803] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.356 [2024-11-29 18:24:28.029809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.356 [2024-11-29 18:24:28.029817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.356 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:08.356 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:08.356 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:08.356 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:08.356 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:08.356 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:08.356 18:24:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.356 18:24:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:08.356 18:24:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.356 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:08.356 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:08.615 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:08.615 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:08.615 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:08.615 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:08.615 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:08.615 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:08.615 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:08.615 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:08.615 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:08.615 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:08.615 18:24:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:20.844 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:20.844 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:20.844 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:20.844 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:20.844 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:20.844 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:20.844 18:24:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.844 18:24:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.844 18:24:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.844 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:20.844 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:20.844 18:24:40 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.61 00:11:20.844 18:24:40 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.61 00:11:20.844 18:24:40 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:20.844 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.61 00:11:20.844 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.61 2 00:11:20.844 remove_attach_helper took 44.61s to complete (handling 2 nvme drive(s)) 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:20.844 18:24:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.844 18:24:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.844 18:24:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.844 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:20.844 18:24:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.844 18:24:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.844 18:24:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.844 18:24:40 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:20.844 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:20.844 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:20.844 18:24:40 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:20.844 18:24:40 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:20.844 18:24:40 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:20.844 18:24:40 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:20.844 18:24:40 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:20.844 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:20.844 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:20.844 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:20.844 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:20.844 18:24:40 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:27.425 18:24:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:27.425 18:24:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:27.425 18:24:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:27.425 18:24:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:27.425 18:24:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:27.425 18:24:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:27.425 18:24:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:27.425 18:24:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:27.425 18:24:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:27.425 18:24:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:27.425 18:24:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:27.425 18:24:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.425 18:24:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:27.425 18:24:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.425 18:24:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:27.425 18:24:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:27.425 [2024-11-29 18:24:46.574549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
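Editorial note: the @21/@709-@722 lines above show how debug_remove_attach_helper times its work: timing_cmd runs the helper under bash's `time` builtin with TIMEFORMAT=%2R so only the wall-clock seconds survive, then echoes that value back to the caller (time=44.61 in the previous pass). A sketch of the wrapper reconstructed from the traced variable names; the exact file-descriptor plumbing is an assumption:

timing_cmd() {
    # cmd_es and TIMEFORMAT follow the trace; fd 3 carries the
    # command's own stdout past the capture so only `time` output
    # (written to stderr) lands in $time.
    local cmd_es=0
    local time=0 TIMEFORMAT=%2R
    exec 3>&1
    time=$({ time "$@" 1>&3; } 2>&1) || cmd_es=$?
    exec 3>&-
    echo "$time"
    return "$cmd_es"
}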
00:11:27.425 [2024-11-29 18:24:46.575355] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.425 [2024-11-29 18:24:46.575390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.425 [2024-11-29 18:24:46.575402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.425 [2024-11-29 18:24:46.575416] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.425 [2024-11-29 18:24:46.575425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.425 [2024-11-29 18:24:46.575432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.425 [2024-11-29 18:24:46.575440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.425 [2024-11-29 18:24:46.575447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.425 [2024-11-29 18:24:46.575468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.425 [2024-11-29 18:24:46.575475] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.425 [2024-11-29 18:24:46.575485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.425 [2024-11-29 18:24:46.575495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.425 [2024-11-29 18:24:46.974552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
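Editorial note: the recurring @12/@13 lines are the test's device enumeration — bdev_get_bdevs over RPC, jq pulling each NVMe controller's PCI address (the /dev/fd/63 in the trace is bash process substitution), then sort -u. As a sketch, with rpc_cmd assumed to be the autotest RPC wrapper already in scope:

bdev_bdfs() {
    # List every bdev via RPC, extract each NVMe controller's BDF, and
    # de-duplicate; jq reads the RPC output through <(...), which is
    # the /dev/fd/63 seen in the xtrace.
    jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
}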
00:11:27.425 [2024-11-29 18:24:46.975306] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.425 [2024-11-29 18:24:46.975340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.425 [2024-11-29 18:24:46.975350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.425 [2024-11-29 18:24:46.975360] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.425 [2024-11-29 18:24:46.975368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.425 [2024-11-29 18:24:46.975376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.425 [2024-11-29 18:24:46.975382] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.425 [2024-11-29 18:24:46.975390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.425 [2024-11-29 18:24:46.975396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.425 [2024-11-29 18:24:46.975404] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.425 [2024-11-29 18:24:46.975410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.425 [2024-11-29 18:24:46.975419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.425 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:27.425 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:27.425 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:27.425 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:27.425 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:27.425 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:27.425 18:24:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.425 18:24:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:27.425 18:24:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.425 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:27.425 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:27.425 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:27.425 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:27.425 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:27.426 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:27.426 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:27.426 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:27.426 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:27.426 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:27.426 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:27.426 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:27.426 18:24:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.659 18:24:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.659 18:24:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.659 18:24:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:39.659 [2024-11-29 18:24:59.374786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:39.659 [2024-11-29 18:24:59.376058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.659 [2024-11-29 18:24:59.376106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.659 [2024-11-29 18:24:59.376123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.659 [2024-11-29 18:24:59.376141] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.659 [2024-11-29 18:24:59.376153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.659 [2024-11-29 18:24:59.376163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.659 [2024-11-29 18:24:59.376178] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.659 [2024-11-29 18:24:59.376187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.659 [2024-11-29 18:24:59.376198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.659 [2024-11-29 18:24:59.376207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.659 [2024-11-29 18:24:59.376217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.659 [2024-11-29 18:24:59.376226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.659 18:24:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.659 18:24:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.659 18:24:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:39.659 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:39.920 [2024-11-29 18:24:59.774788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:39.920 [2024-11-29 18:24:59.775896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.920 [2024-11-29 18:24:59.775947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.920 [2024-11-29 18:24:59.775961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.920 [2024-11-29 18:24:59.775977] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.920 [2024-11-29 18:24:59.775986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.920 [2024-11-29 18:24:59.775997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.920 [2024-11-29 18:24:59.776006] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.920 [2024-11-29 18:24:59.776017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.920 [2024-11-29 18:24:59.776025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.920 [2024-11-29 18:24:59.776036] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.920 [2024-11-29 18:24:59.776046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.920 [2024-11-29 18:24:59.776057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.182 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:40.182 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:40.182 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:40.182 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:40.182 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:40.182 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:40.182 18:24:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.182 18:24:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.182 18:24:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.182 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:40.182 18:24:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:40.182 18:25:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:40.182 18:25:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:40.182 18:25:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:40.445 18:25:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:40.445 18:25:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:40.445 18:25:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:40.445 18:25:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:40.445 18:25:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:40.445 18:25:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:40.445 18:25:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:40.445 18:25:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.686 18:25:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.686 18:25:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.686 18:25:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:52.686 [2024-11-29 18:25:12.274966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
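Editorial note: after the 12-second settle (@66), the @68-@71 lines re-enumerate and compare — the iteration passes only when exactly the removed controllers are visible again. The backslash-heavy pattern at @71 is just xtrace escaping the literal right-hand side of a [[ == ]] test. Roughly, under the assumption that nvmes holds the expected BDFs:

# Rescan check as traced at @70/@71: exactly the removed controllers
# must reappear, in sorted order.
nvmes=(0000:00:10.0 0000:00:11.0)
bdfs=($(bdev_bdfs))
[[ "${bdfs[*]}" == "${nvmes[*]}" ]]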
00:11:52.686 [2024-11-29 18:25:12.275784] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.686 [2024-11-29 18:25:12.275807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.686 [2024-11-29 18:25:12.275820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.686 [2024-11-29 18:25:12.275831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.686 [2024-11-29 18:25:12.275842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.686 [2024-11-29 18:25:12.275849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.686 [2024-11-29 18:25:12.275857] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.686 [2024-11-29 18:25:12.275864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.686 [2024-11-29 18:25:12.275871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.686 [2024-11-29 18:25:12.275877] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.686 [2024-11-29 18:25:12.275885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.686 [2024-11-29 18:25:12.275891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.686 18:25:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.686 18:25:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.686 18:25:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:52.686 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:52.947 [2024-11-29 18:25:12.674969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
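Editorial note: the repeated @50/@51 pairs above are a poll loop — after issuing the removals, the test re-runs bdev_bdfs every half second and prints which BDFs it is still waiting on, until none remain. The loop shape is inferred from the trace:

# Detach-wait loop implied by the @50/@51 lines.
while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
done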
00:11:52.947 [2024-11-29 18:25:12.675704] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.947 [2024-11-29 18:25:12.675732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.947 [2024-11-29 18:25:12.675742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.947 [2024-11-29 18:25:12.675752] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.947 [2024-11-29 18:25:12.675760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.947 [2024-11-29 18:25:12.675768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.947 [2024-11-29 18:25:12.675774] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.947 [2024-11-29 18:25:12.675784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.947 [2024-11-29 18:25:12.675790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.947 [2024-11-29 18:25:12.675798] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.947 [2024-11-29 18:25:12.675804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.947 [2024-11-29 18:25:12.675812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.947 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:52.947 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:52.947 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:52.947 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.947 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.947 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.947 18:25:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.947 18:25:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.947 18:25:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.207 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:53.207 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:53.207 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:53.207 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:53.207 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:53.207 18:25:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:53.207 18:25:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:53.207 18:25:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:53.207 18:25:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:53.207 18:25:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:53.207 18:25:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:53.207 18:25:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:53.207 18:25:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:05.439 18:25:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:05.439 18:25:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:05.439 18:25:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:05.439 18:25:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.439 18:25:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.439 18:25:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.439 18:25:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.439 18:25:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.439 18:25:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.439 18:25:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:05.439 18:25:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:05.439 18:25:25 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.63 00:12:05.439 18:25:25 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.63 00:12:05.439 18:25:25 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:05.439 18:25:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.63 00:12:05.439 18:25:25 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.63 2 00:12:05.439 remove_attach_helper took 44.63s to complete (handling 2 nvme drive(s)) 18:25:25 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:05.439 18:25:25 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79112 00:12:05.439 18:25:25 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 79112 ']' 00:12:05.439 18:25:25 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 79112 00:12:05.439 18:25:25 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:05.439 18:25:25 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.439 18:25:25 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79112 00:12:05.439 18:25:25 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.439 18:25:25 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.439 killing process with pid 79112 00:12:05.439 18:25:25 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79112' 00:12:05.439 18:25:25 sw_hotplug -- common/autotest_common.sh@973 -- # kill 79112 00:12:05.439 18:25:25 sw_hotplug -- common/autotest_common.sh@978 -- # wait 79112 00:12:05.698 18:25:25 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:05.959 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:06.220 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:06.220 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:06.481 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:06.481 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:06.481 00:12:06.481 real 2m28.044s 00:12:06.481 user 1m48.025s 00:12:06.481 sys 0m18.437s 00:12:06.481 18:25:26 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.481 ************************************ 00:12:06.481 END TEST sw_hotplug 00:12:06.481 ************************************ 00:12:06.481 18:25:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.481 18:25:26 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:06.481 18:25:26 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:06.481 18:25:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:06.481 18:25:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.481 18:25:26 -- common/autotest_common.sh@10 -- # set +x 00:12:06.481 ************************************ 00:12:06.481 START TEST nvme_xnvme 00:12:06.481 ************************************ 00:12:06.481 18:25:26 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:06.746 * Looking for test storage... 00:12:06.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:06.746 18:25:26 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:06.746 18:25:26 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:06.746 18:25:26 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:06.746 18:25:26 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:06.746 18:25:26 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:06.747 18:25:26 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.747 18:25:26 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:06.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.747 --rc genhtml_branch_coverage=1 00:12:06.747 --rc genhtml_function_coverage=1 00:12:06.747 --rc genhtml_legend=1 00:12:06.747 --rc geninfo_all_blocks=1 00:12:06.747 --rc geninfo_unexecuted_blocks=1 00:12:06.747 00:12:06.747 ' 00:12:06.747 18:25:26 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:06.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.747 --rc genhtml_branch_coverage=1 00:12:06.747 --rc genhtml_function_coverage=1 00:12:06.747 --rc genhtml_legend=1 00:12:06.747 --rc geninfo_all_blocks=1 00:12:06.747 --rc geninfo_unexecuted_blocks=1 00:12:06.747 00:12:06.747 ' 00:12:06.747 18:25:26 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:06.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.747 --rc genhtml_branch_coverage=1 00:12:06.747 --rc genhtml_function_coverage=1 00:12:06.747 --rc genhtml_legend=1 00:12:06.747 --rc geninfo_all_blocks=1 00:12:06.747 --rc geninfo_unexecuted_blocks=1 00:12:06.747 00:12:06.747 ' 00:12:06.747 18:25:26 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:06.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.747 --rc genhtml_branch_coverage=1 00:12:06.747 --rc genhtml_function_coverage=1 00:12:06.747 --rc genhtml_legend=1 00:12:06.747 --rc geninfo_all_blocks=1 00:12:06.747 --rc geninfo_unexecuted_blocks=1 00:12:06.747 00:12:06.747 ' 00:12:06.747 18:25:26 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:06.747 18:25:26 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:06.747 18:25:26 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:06.747 18:25:26 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:06.747 18:25:26 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:06.747 18:25:26 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:06.747 18:25:26 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:06.747 18:25:26 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:06.747 18:25:26 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:06.747 18:25:26 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
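Editorial note: stepping back to the scripts/common.sh trace a little further above — the `lt 1.15 2` call used to pick lcov options splits both version strings on '.', '-' or ':' and compares them numerically component by component, treating missing components as zero. Condensed into a sketch (not the verbatim scripts/common.sh source):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    # Split on the IFS characters seen in the trace and compare
    # component-wise; shorter versions are zero-padded.
    local IFS=.-: op=$2 v d1 d2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
        (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *'='* ]]
}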
00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@78 -- 
# CONFIG_FIO_PLUGIN=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:06.747 18:25:26 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:06.747 18:25:26 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:06.747 18:25:26 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:06.747 18:25:26 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:06.747 18:25:26 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:06.747 18:25:26 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:06.747 18:25:26 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:06.747 18:25:26 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:06.747 18:25:26 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:06.748 18:25:26 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:06.748 18:25:26 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:06.748 18:25:26 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:06.748 18:25:26 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:06.748 18:25:26 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:06.748 18:25:26 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:06.748 18:25:26 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:06.748 18:25:26 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:06.748 #define SPDK_CONFIG_H 00:12:06.748 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:06.748 #define SPDK_CONFIG_APPS 1 00:12:06.748 #define SPDK_CONFIG_ARCH native 00:12:06.748 #define SPDK_CONFIG_ASAN 1 00:12:06.748 #undef SPDK_CONFIG_AVAHI 00:12:06.748 #undef SPDK_CONFIG_CET 00:12:06.748 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:06.748 #define SPDK_CONFIG_COVERAGE 1 00:12:06.748 #define SPDK_CONFIG_CROSS_PREFIX 00:12:06.748 #undef SPDK_CONFIG_CRYPTO 00:12:06.748 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:06.748 #undef SPDK_CONFIG_CUSTOMOCF 00:12:06.748 #undef SPDK_CONFIG_DAOS 00:12:06.748 #define SPDK_CONFIG_DAOS_DIR 00:12:06.748 #define SPDK_CONFIG_DEBUG 1 00:12:06.748 #undef 
SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:06.748 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:12:06.748 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:12:06.748 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:12:06.748 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:06.748 #undef SPDK_CONFIG_DPDK_UADK 00:12:06.748 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:06.748 #define SPDK_CONFIG_EXAMPLES 1 00:12:06.748 #undef SPDK_CONFIG_FC 00:12:06.748 #define SPDK_CONFIG_FC_PATH 00:12:06.748 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:06.748 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:06.748 #define SPDK_CONFIG_FSDEV 1 00:12:06.748 #undef SPDK_CONFIG_FUSE 00:12:06.748 #undef SPDK_CONFIG_FUZZER 00:12:06.748 #define SPDK_CONFIG_FUZZER_LIB 00:12:06.748 #undef SPDK_CONFIG_GOLANG 00:12:06.748 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:06.748 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:06.748 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:06.748 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:06.748 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:06.748 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:06.748 #undef SPDK_CONFIG_HAVE_LZ4 00:12:06.748 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:06.748 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:06.748 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:06.748 #define SPDK_CONFIG_IDXD 1 00:12:06.748 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:06.748 #undef SPDK_CONFIG_IPSEC_MB 00:12:06.748 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:06.748 #define SPDK_CONFIG_ISAL 1 00:12:06.748 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:06.748 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:06.748 #define SPDK_CONFIG_LIBDIR 00:12:06.748 #undef SPDK_CONFIG_LTO 00:12:06.748 #define SPDK_CONFIG_MAX_LCORES 128 00:12:06.748 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:06.748 #define SPDK_CONFIG_NVME_CUSE 1 00:12:06.748 #undef SPDK_CONFIG_OCF 00:12:06.748 #define SPDK_CONFIG_OCF_PATH 00:12:06.748 #define SPDK_CONFIG_OPENSSL_PATH 00:12:06.748 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:06.748 #define SPDK_CONFIG_PGO_DIR 00:12:06.748 #undef SPDK_CONFIG_PGO_USE 00:12:06.748 #define SPDK_CONFIG_PREFIX /usr/local 00:12:06.748 #undef SPDK_CONFIG_RAID5F 00:12:06.748 #undef SPDK_CONFIG_RBD 00:12:06.748 #define SPDK_CONFIG_RDMA 1 00:12:06.748 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:06.748 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:06.748 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:06.748 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:06.748 #define SPDK_CONFIG_SHARED 1 00:12:06.748 #undef SPDK_CONFIG_SMA 00:12:06.748 #define SPDK_CONFIG_TESTS 1 00:12:06.748 #undef SPDK_CONFIG_TSAN 00:12:06.748 #define SPDK_CONFIG_UBLK 1 00:12:06.748 #define SPDK_CONFIG_UBSAN 1 00:12:06.748 #undef SPDK_CONFIG_UNIT_TESTS 00:12:06.748 #undef SPDK_CONFIG_URING 00:12:06.748 #define SPDK_CONFIG_URING_PATH 00:12:06.748 #undef SPDK_CONFIG_URING_ZNS 00:12:06.748 #undef SPDK_CONFIG_USDT 00:12:06.748 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:06.748 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:06.748 #undef SPDK_CONFIG_VFIO_USER 00:12:06.748 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:06.748 #define SPDK_CONFIG_VHOST 1 00:12:06.748 #define SPDK_CONFIG_VIRTIO 1 00:12:06.748 #undef SPDK_CONFIG_VTUNE 00:12:06.748 #define SPDK_CONFIG_VTUNE_DIR 00:12:06.748 #define SPDK_CONFIG_WERROR 1 00:12:06.748 #define SPDK_CONFIG_WPDK_DIR 00:12:06.748 #define SPDK_CONFIG_XNVME 1 00:12:06.748 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:06.748 18:25:26 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:06.748 18:25:26 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:06.748 18:25:26 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.748 18:25:26 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.748 18:25:26 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.748 18:25:26 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.748 18:25:26 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.748 18:25:26 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.748 18:25:26 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:06.748 18:25:26 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@68 -- 
# uname -s 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:06.748 18:25:26 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:06.748 18:25:26 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- 
common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:06.749 18:25:26 nvme_xnvme -- 
common/autotest_common.sh@128 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@140 -- # : v23.11 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 
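The `-- # : 0` / `-- # export SPDK_TEST_*` pairs running through this stretch are autotest_common.sh stamping a default onto each test flag before exporting it; xtrace prints only the expanded value, so a flag preset by the CI job traces as `: 1`. A minimal sketch of the idiom, with SPDK_TEST_XNVME taken from the trace — the real source may phrase the expansion slightly differently:

  # Assign a default only if the CI job has not already set the flag,
  # then export it so every child script and test binary inherits it.
  : "${SPDK_TEST_XNVME:=1}"     # xtrace shows this as `: 1`
  export SPDK_TEST_XNVME        # xtrace shows `export SPDK_TEST_XNVME`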
00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:06.749 18:25:26 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
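The exports just traced configure the sanitizer runtimes for every binary the suite spawns, and the leak-suppression file is rebuilt a few records later. A condensed sketch of that sequence — the option strings are verbatim from the trace, while the single-redirect file write simplifies the rm/cat/echo steps the script actually runs:

  # Make ASan abort on error but tolerate mixed new/delete; route UBSan
  # failures to exit code 134 with stack traces printed.
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
  # Suppress a known libfuse3 leak instead of failing every leak check.
  echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file
  export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file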
00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 80444 ]] 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 80444 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.At1Xp7 00:12:06.750 18:25:26 
nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.At1Xp7/tests/xnvme /tmp/spdk.At1Xp7 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13211619328 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6372880384 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261964800 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13211619328 00:12:06.750 18:25:26 nvme_xnvme -- 
common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6372880384 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265245696 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98425372672 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1277407232 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:06.750 * Looking for test storage... 
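The records around this point are set_test_storage() deciding where the test data can live: `df -T` output is read into associative arrays keyed by mount point, then each storage candidate directory is checked for enough free space. A minimal re-creation of that loop — the byte-granular df flags and the final export are assumptions, while the array names, the awk filter, and requested_size=2214592512 (2 GiB plus 64 MiB of slack) match the trace:

  declare -A fss avails
  # Index every mount by filesystem type and free bytes (header stripped).
  while read -r source fs size use avail _ mount; do
      fss["$mount"]=$fs        # fss feeds the tmpfs/ramfs special cases
      avails["$mount"]=$avail
  done < <(df -T --block-size=1 | grep -v Filesystem)

  requested_size=$((2147483648 + 67108864))   # 2 GiB + 64 MiB slack
  for target_dir in "${storage_candidates[@]}"; do
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
      if (( avails[$mount] >= requested_size )); then
          export SPDK_TEST_STORAGE=$target_dir   # /home/... on this run
          break
      fi
  done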
00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:06.750 18:25:26 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13211619328 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:06.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:06.751 18:25:26 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:06.751 18:25:26 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.751 18:25:26 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.751 18:25:26 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.751 18:25:26 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.751 18:25:26 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:06.751 18:25:26 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.751 18:25:26 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.751 18:25:26 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.751 18:25:26 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.751 18:25:26 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.751 18:25:26 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:06.751 18:25:26 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:06.751 18:25:26 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:06.751 18:25:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.751 18:25:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:06.751 18:25:26 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:07.014 18:25:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:07.014 18:25:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.014 18:25:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:07.014 18:25:26 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.014 18:25:26 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:07.014 18:25:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:07.014 18:25:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.014 18:25:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:07.014 18:25:26 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.014 18:25:26 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.014 18:25:26 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.014 18:25:26 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:07.014 18:25:26 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.014 18:25:26 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:07.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.014 --rc genhtml_branch_coverage=1 00:12:07.014 --rc genhtml_function_coverage=1 00:12:07.014 --rc genhtml_legend=1 00:12:07.014 --rc geninfo_all_blocks=1 00:12:07.014 --rc geninfo_unexecuted_blocks=1 00:12:07.014 00:12:07.014 ' 00:12:07.014 18:25:26 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:07.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.014 --rc genhtml_branch_coverage=1 00:12:07.014 --rc genhtml_function_coverage=1 00:12:07.014 --rc genhtml_legend=1 00:12:07.014 --rc geninfo_all_blocks=1 
00:12:07.014 --rc geninfo_unexecuted_blocks=1 00:12:07.014 00:12:07.014 ' 00:12:07.014 18:25:26 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:07.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.014 --rc genhtml_branch_coverage=1 00:12:07.014 --rc genhtml_function_coverage=1 00:12:07.014 --rc genhtml_legend=1 00:12:07.014 --rc geninfo_all_blocks=1 00:12:07.014 --rc geninfo_unexecuted_blocks=1 00:12:07.014 00:12:07.014 ' 00:12:07.014 18:25:26 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:07.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.014 --rc genhtml_branch_coverage=1 00:12:07.014 --rc genhtml_function_coverage=1 00:12:07.014 --rc genhtml_legend=1 00:12:07.014 --rc geninfo_all_blocks=1 00:12:07.014 --rc geninfo_unexecuted_blocks=1 00:12:07.014 00:12:07.014 ' 00:12:07.014 18:25:26 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:07.014 18:25:26 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.014 18:25:26 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.014 18:25:26 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.014 18:25:26 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.014 18:25:26 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.014 18:25:26 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.014 18:25:26 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.014 18:25:26 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:07.014 18:25:26 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.014 18:25:26 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:07.014 18:25:26 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:07.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:07.276 Waiting for block devices as requested 00:12:07.276 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:07.538 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:07.538 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:07.538 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:12.823 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:12.823 18:25:32 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:13.084 18:25:32 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:13.084 18:25:32 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:13.084 18:25:32 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:13.084 18:25:32 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:13.084 18:25:32 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:13.084 18:25:32 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:13.084 18:25:32 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:13.345 No valid GPT data, bailing 00:12:13.345 18:25:33 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:13.345 18:25:33 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:13.345 18:25:33 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:13.345 18:25:33 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:13.345 18:25:33 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:13.345 18:25:33 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:13.345 18:25:33 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:13.345 18:25:33 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:13.345 18:25:33 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:13.345 18:25:33 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:13.345 18:25:33 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:13.345 18:25:33 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:13.345 18:25:33 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:13.345 18:25:33 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:13.345 18:25:33 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:13.345 18:25:33 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:13.345 18:25:33 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:13.345 18:25:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:13.345 18:25:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.345 18:25:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:13.345 ************************************ 00:12:13.345 START TEST xnvme_rpc 00:12:13.345 ************************************ 00:12:13.345 18:25:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:13.345 18:25:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:13.345 18:25:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:13.345 18:25:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:13.345 18:25:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:13.345 18:25:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80836 00:12:13.345 18:25:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80836 00:12:13.345 18:25:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80836 ']' 00:12:13.345 18:25:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.345 18:25:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.346 18:25:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.346 18:25:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.346 18:25:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.346 18:25:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:13.346 [2024-11-29 18:25:33.143470] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
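The xnvme_rpc test that starts here drives the freshly launched target over its UNIX-socket RPC: create an xnvme bdev on /dev/nvme0n1 with the libaio mechanism, read the registered config back, compare each parameter, then delete the bdev and kill the target. Roughly, outside the test harness — rpc.py standing in for the rpc_cmd helper is an assumption, while the method names and jq filter are verbatim from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio   # conserve_cpu left at its default
  # Pull one param back out of the registered bdev config; the test
  # repeats this for name, filename, io_mechanism, and conserve_cpu.
  "$rpc" framework_get_config bdev |
      jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
  "$rpc" bdev_xnvme_delete xnvme_bdev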
00:12:13.346 [2024-11-29 18:25:33.143621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80836 ] 00:12:13.607 [2024-11-29 18:25:33.307966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.607 [2024-11-29 18:25:33.336793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.282 18:25:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.282 18:25:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:14.282 18:25:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:14.282 18:25:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.282 18:25:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.282 xnvme_bdev 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:14.282 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.283 18:25:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:14.283 18:25:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:14.283 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.283 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.283 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.283 18:25:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80836 00:12:14.283 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80836 ']' 00:12:14.283 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80836 00:12:14.283 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:14.283 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.283 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80836 00:12:14.283 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.283 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.283 killing process with pid 80836 00:12:14.283 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80836' 00:12:14.283 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80836 00:12:14.283 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80836 00:12:14.854 00:12:14.854 real 0m1.401s 00:12:14.854 user 0m1.445s 00:12:14.854 sys 0m0.413s 00:12:14.854 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.854 ************************************ 00:12:14.854 END TEST xnvme_rpc 00:12:14.854 ************************************ 00:12:14.854 18:25:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.854 18:25:34 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:14.854 18:25:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:14.854 18:25:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.854 18:25:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:14.854 ************************************ 00:12:14.854 START TEST xnvme_bdevperf 00:12:14.854 ************************************ 00:12:14.854 18:25:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:14.854 18:25:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:14.854 18:25:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:14.854 18:25:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:14.854 18:25:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:14.854 18:25:34 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:14.854 18:25:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:14.854 18:25:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:14.854 { 00:12:14.854 "subsystems": [ 00:12:14.854 { 00:12:14.854 "subsystem": "bdev", 00:12:14.854 "config": [ 00:12:14.854 { 00:12:14.854 "params": { 00:12:14.854 "io_mechanism": "libaio", 00:12:14.854 "conserve_cpu": false, 00:12:14.854 "filename": "/dev/nvme0n1", 00:12:14.854 "name": "xnvme_bdev" 00:12:14.854 }, 00:12:14.854 "method": "bdev_xnvme_create" 00:12:14.854 }, 00:12:14.854 { 00:12:14.854 "method": "bdev_wait_for_examine" 00:12:14.854 } 00:12:14.854 ] 00:12:14.854 } 00:12:14.854 ] 00:12:14.854 } 00:12:14.854 [2024-11-29 18:25:34.580525] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:12:14.854 [2024-11-29 18:25:34.580668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80894 ] 00:12:14.854 [2024-11-29 18:25:34.742181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.116 [2024-11-29 18:25:34.762437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.116 Running I/O for 5 seconds... 00:12:17.004 31171.00 IOPS, 121.76 MiB/s [2024-11-29T18:25:38.300Z] 29819.50 IOPS, 116.48 MiB/s [2024-11-29T18:25:38.874Z] 28270.33 IOPS, 110.43 MiB/s [2024-11-29T18:25:40.269Z] 27576.00 IOPS, 107.72 MiB/s 00:12:20.364 Latency(us) 00:12:20.364 [2024-11-29T18:25:40.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.364 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:20.364 xnvme_bdev : 5.00 27401.09 107.04 0.00 0.00 2330.83 247.34 7360.20 00:12:20.364 [2024-11-29T18:25:40.269Z] =================================================================================================================== 00:12:20.364 [2024-11-29T18:25:40.269Z] Total : 27401.09 107.04 0.00 0.00 2330.83 247.34 7360.20 00:12:20.364 18:25:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:20.364 18:25:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:20.364 18:25:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:20.364 18:25:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:20.364 18:25:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:20.364 { 00:12:20.364 "subsystems": [ 00:12:20.364 { 00:12:20.364 "subsystem": "bdev", 00:12:20.364 "config": [ 00:12:20.364 { 00:12:20.364 "params": { 00:12:20.364 "io_mechanism": "libaio", 00:12:20.364 "conserve_cpu": false, 00:12:20.364 "filename": "/dev/nvme0n1", 00:12:20.364 "name": "xnvme_bdev" 00:12:20.364 }, 00:12:20.364 "method": "bdev_xnvme_create" 00:12:20.364 }, 00:12:20.364 { 00:12:20.364 "method": "bdev_wait_for_examine" 00:12:20.364 } 00:12:20.364 ] 00:12:20.364 } 00:12:20.364 ] 00:12:20.364 } 00:12:20.364 [2024-11-29 18:25:40.220617] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
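Each bdevperf run here receives its bdev configuration as the JSON document printed just before it: gen_conf emits the subsystem config on stdout, and the harness hands it to bdevperf as an anonymous file descriptor, which is why the traced command line shows --json /dev/fd/62. A sketch of the equivalent invocation — the process substitution is an inference from that /dev/fd path, not a quote of the harness source:

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  # 64-deep random 4 KiB writes for 5 s against the xnvme bdev from the JSON.
  "$bdevperf" --json <(gen_conf) -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096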
00:12:20.364 [2024-11-29 18:25:40.220763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80959 ] 00:12:20.625 [2024-11-29 18:25:40.384368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.625 [2024-11-29 18:25:40.423508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.885 Running I/O for 5 seconds... 00:12:22.771 29969.00 IOPS, 117.07 MiB/s [2024-11-29T18:25:43.620Z] 31664.00 IOPS, 123.69 MiB/s [2024-11-29T18:25:45.004Z] 32084.00 IOPS, 125.33 MiB/s [2024-11-29T18:25:45.948Z] 32340.00 IOPS, 126.33 MiB/s [2024-11-29T18:25:45.948Z] 32256.20 IOPS, 126.00 MiB/s 00:12:26.043 Latency(us) 00:12:26.043 [2024-11-29T18:25:45.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.043 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:26.043 xnvme_bdev : 5.01 32236.61 125.92 0.00 0.00 1980.66 437.96 6553.60 00:12:26.043 [2024-11-29T18:25:45.948Z] =================================================================================================================== 00:12:26.043 [2024-11-29T18:25:45.948Z] Total : 32236.61 125.92 0.00 0.00 1980.66 437.96 6553.60 00:12:26.043 00:12:26.043 real 0m11.293s 00:12:26.043 user 0m3.471s 00:12:26.043 sys 0m6.347s 00:12:26.043 18:25:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.043 ************************************ 00:12:26.043 18:25:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:26.043 END TEST xnvme_bdevperf 00:12:26.043 ************************************ 00:12:26.043 18:25:45 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:26.043 18:25:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:26.043 18:25:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.043 18:25:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.043 ************************************ 00:12:26.043 START TEST xnvme_fio_plugin 00:12:26.043 ************************************ 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:26.043 18:25:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:26.043 { 00:12:26.043 "subsystems": [ 00:12:26.043 { 00:12:26.043 "subsystem": "bdev", 00:12:26.043 "config": [ 00:12:26.043 { 00:12:26.043 "params": { 00:12:26.043 "io_mechanism": "libaio", 00:12:26.043 "conserve_cpu": false, 00:12:26.043 "filename": "/dev/nvme0n1", 00:12:26.043 "name": "xnvme_bdev" 00:12:26.043 }, 00:12:26.043 "method": "bdev_xnvme_create" 00:12:26.043 }, 00:12:26.043 { 00:12:26.043 "method": "bdev_wait_for_examine" 00:12:26.043 } 00:12:26.043 ] 00:12:26.043 } 00:12:26.043 ] 00:12:26.043 } 00:12:26.305 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:26.305 fio-3.35 00:12:26.305 Starting 1 thread 00:12:32.898 00:12:32.898 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81068: Fri Nov 29 18:25:51 2024 00:12:32.898 read: IOPS=32.2k, BW=126MiB/s (132MB/s)(630MiB/5001msec) 00:12:32.899 slat (usec): min=4, max=2172, avg=19.33, stdev=95.66 00:12:32.899 clat (usec): min=106, max=4818, avg=1450.87, stdev=523.43 00:12:32.899 lat (usec): min=184, max=4856, avg=1470.21, stdev=513.54 00:12:32.899 clat percentiles (usec): 00:12:32.899 | 1.00th=[ 302], 5.00th=[ 603], 10.00th=[ 791], 20.00th=[ 1020], 00:12:32.899 | 30.00th=[ 1188], 40.00th=[ 1336], 50.00th=[ 1467], 60.00th=[ 1582], 00:12:32.899 | 70.00th=[ 1696], 80.00th=[ 1827], 90.00th=[ 2057], 95.00th=[ 2278], 00:12:32.899 | 99.00th=[ 2933], 99.50th=[ 3359], 99.90th=[ 3982], 99.95th=[ 4113], 00:12:32.899 | 99.99th=[ 4555] 00:12:32.899 bw ( KiB/s): 
min=123576, max=135488, per=100.00%, avg=129321.78, stdev=4075.05, samples=9 00:12:32.899 iops : min=30894, max=33872, avg=32330.44, stdev=1018.76, samples=9 00:12:32.899 lat (usec) : 250=0.49%, 500=2.54%, 750=5.62%, 1000=10.41% 00:12:32.899 lat (msec) : 2=69.12%, 4=11.74%, 10=0.09% 00:12:32.899 cpu : usr=49.48%, sys=42.48%, ctx=16, majf=0, minf=773 00:12:32.899 IO depths : 1=0.7%, 2=1.5%, 4=3.5%, 8=8.6%, 16=22.5%, 32=61.0%, >=64=2.1% 00:12:32.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.899 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:32.899 issued rwts: total=161217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.899 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:32.899 00:12:32.899 Run status group 0 (all jobs): 00:12:32.899 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=630MiB (660MB), run=5001-5001msec 00:12:32.899 ----------------------------------------------------- 00:12:32.899 Suppressions used: 00:12:32.899 count bytes template 00:12:32.899 1 11 /usr/src/fio/parse.c 00:12:32.899 1 8 libtcmalloc_minimal.so 00:12:32.899 1 904 libcrypto.so 00:12:32.899 ----------------------------------------------------- 00:12:32.899 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:32.899 
18:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:32.899 18:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:32.899 { 00:12:32.899 "subsystems": [ 00:12:32.899 { 00:12:32.899 "subsystem": "bdev", 00:12:32.899 "config": [ 00:12:32.899 { 00:12:32.899 "params": { 00:12:32.899 "io_mechanism": "libaio", 00:12:32.899 "conserve_cpu": false, 00:12:32.899 "filename": "/dev/nvme0n1", 00:12:32.899 "name": "xnvme_bdev" 00:12:32.899 }, 00:12:32.899 "method": "bdev_xnvme_create" 00:12:32.899 }, 00:12:32.899 { 00:12:32.899 "method": "bdev_wait_for_examine" 00:12:32.899 } 00:12:32.899 ] 00:12:32.899 } 00:12:32.899 ] 00:12:32.899 } 00:12:32.899 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:32.899 fio-3.35 00:12:32.899 Starting 1 thread 00:12:38.190 00:12:38.190 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81149: Fri Nov 29 18:25:57 2024 00:12:38.190 write: IOPS=34.3k, BW=134MiB/s (141MB/s)(671MiB/5001msec); 0 zone resets 00:12:38.190 slat (usec): min=4, max=2081, avg=20.39, stdev=88.44 00:12:38.190 clat (usec): min=96, max=5793, avg=1304.06, stdev=525.28 00:12:38.190 lat (usec): min=187, max=5817, avg=1324.45, stdev=516.95 00:12:38.190 clat percentiles (usec): 00:12:38.190 | 1.00th=[ 277], 5.00th=[ 498], 10.00th=[ 652], 20.00th=[ 873], 00:12:38.190 | 30.00th=[ 1020], 40.00th=[ 1156], 50.00th=[ 1270], 60.00th=[ 1401], 00:12:38.190 | 70.00th=[ 1549], 80.00th=[ 1696], 90.00th=[ 1942], 95.00th=[ 2180], 00:12:38.190 | 99.00th=[ 2868], 99.50th=[ 3130], 99.90th=[ 3818], 99.95th=[ 4047], 00:12:38.190 | 99.99th=[ 4490] 00:12:38.190 bw ( KiB/s): min=127712, max=145384, per=99.92%, avg=137245.33, stdev=4835.48, samples=9 00:12:38.190 iops : min=31928, max=36346, avg=34311.33, stdev=1208.87, samples=9 00:12:38.190 lat (usec) : 100=0.01%, 250=0.70%, 500=4.34%, 750=8.84%, 1000=14.57% 00:12:38.190 lat (msec) : 2=63.30%, 4=8.20%, 10=0.06% 00:12:38.190 cpu : usr=42.74%, sys=47.88%, ctx=19, majf=0, minf=774 00:12:38.190 IO depths : 1=0.5%, 2=1.2%, 4=3.1%, 8=8.3%, 16=22.5%, 32=62.2%, >=64=2.1% 00:12:38.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.190 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:38.190 issued rwts: total=0,171736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.190 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:38.190 00:12:38.190 Run status group 0 (all jobs): 00:12:38.190 WRITE: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=671MiB (703MB), run=5001-5001msec 00:12:38.190 ----------------------------------------------------- 00:12:38.190 Suppressions used: 00:12:38.190 count bytes template 00:12:38.190 1 11 /usr/src/fio/parse.c 00:12:38.190 1 8 libtcmalloc_minimal.so 00:12:38.190 1 904 libcrypto.so 00:12:38.190 ----------------------------------------------------- 00:12:38.190 00:12:38.190 00:12:38.190 real 0m12.126s 00:12:38.190 user 0m5.753s 00:12:38.190 sys 0m5.120s 
00:12:38.190 18:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.190 18:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:38.190 ************************************ 00:12:38.190 END TEST xnvme_fio_plugin 00:12:38.190 ************************************ 00:12:38.190 18:25:58 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:38.190 18:25:58 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:38.190 18:25:58 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:38.190 18:25:58 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:38.190 18:25:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:38.190 18:25:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.190 18:25:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:38.190 ************************************ 00:12:38.190 START TEST xnvme_rpc 00:12:38.190 ************************************ 00:12:38.190 18:25:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:38.190 18:25:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:38.190 18:25:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:38.190 18:25:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:38.190 18:25:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:38.190 18:25:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81230 00:12:38.190 18:25:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81230 00:12:38.190 18:25:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:38.190 18:25:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81230 ']' 00:12:38.190 18:25:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.190 18:25:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.190 18:25:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.190 18:25:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.190 18:25:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.452 [2024-11-29 18:25:58.163792] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:12:38.452 [2024-11-29 18:25:58.163959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81230 ] 00:12:38.452 [2024-11-29 18:25:58.328823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.712 [2024-11-29 18:25:58.358798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.285 xnvme_bdev 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81230 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81230 ']' 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81230 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:39.285 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81230 00:12:39.546 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:39.546 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:39.546 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81230' 00:12:39.546 killing process with pid 81230 00:12:39.546 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81230 00:12:39.546 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81230 00:12:39.808 00:12:39.808 real 0m1.447s 00:12:39.808 user 0m1.454s 00:12:39.808 sys 0m0.460s 00:12:39.808 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.808 18:25:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.808 ************************************ 00:12:39.808 END TEST xnvme_rpc 00:12:39.808 ************************************ 00:12:39.808 18:25:59 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:39.808 18:25:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:39.808 18:25:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.808 18:25:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:39.808 ************************************ 00:12:39.808 START TEST xnvme_bdevperf 00:12:39.808 ************************************ 00:12:39.808 18:25:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:39.808 18:25:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:39.808 18:25:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:39.808 18:25:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:39.808 18:25:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:39.808 18:25:59 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:39.808 18:25:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:39.808 18:25:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:39.808 { 00:12:39.808 "subsystems": [ 00:12:39.808 { 00:12:39.808 "subsystem": "bdev", 00:12:39.808 "config": [ 00:12:39.808 { 00:12:39.808 "params": { 00:12:39.808 "io_mechanism": "libaio", 00:12:39.808 "conserve_cpu": true, 00:12:39.808 "filename": "/dev/nvme0n1", 00:12:39.808 "name": "xnvme_bdev" 00:12:39.808 }, 00:12:39.808 "method": "bdev_xnvme_create" 00:12:39.808 }, 00:12:39.808 { 00:12:39.809 "method": "bdev_wait_for_examine" 00:12:39.809 } 00:12:39.809 ] 00:12:39.809 } 00:12:39.809 ] 00:12:39.809 } 00:12:39.809 [2024-11-29 18:25:59.673371] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:12:39.809 [2024-11-29 18:25:59.673547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81287 ] 00:12:40.070 [2024-11-29 18:25:59.840372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.070 [2024-11-29 18:25:59.869070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.331 Running I/O for 5 seconds... 00:12:42.217 27848.00 IOPS, 108.78 MiB/s [2024-11-29T18:26:03.065Z] 28586.00 IOPS, 111.66 MiB/s [2024-11-29T18:26:04.007Z] 29169.33 IOPS, 113.94 MiB/s [2024-11-29T18:26:05.392Z] 28238.00 IOPS, 110.30 MiB/s [2024-11-29T18:26:05.392Z] 27488.00 IOPS, 107.38 MiB/s 00:12:45.487 Latency(us) 00:12:45.487 [2024-11-29T18:26:05.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.487 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:45.487 xnvme_bdev : 5.01 27449.03 107.22 0.00 0.00 2326.91 447.41 8318.03 00:12:45.487 [2024-11-29T18:26:05.392Z] =================================================================================================================== 00:12:45.487 [2024-11-29T18:26:05.392Z] Total : 27449.03 107.22 0.00 0.00 2326.91 447.41 8318.03 00:12:45.487 18:26:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:45.487 18:26:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:45.487 18:26:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:45.487 18:26:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:45.487 18:26:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:45.487 { 00:12:45.487 "subsystems": [ 00:12:45.487 { 00:12:45.487 "subsystem": "bdev", 00:12:45.487 "config": [ 00:12:45.487 { 00:12:45.487 "params": { 00:12:45.487 "io_mechanism": "libaio", 00:12:45.487 "conserve_cpu": true, 00:12:45.487 "filename": "/dev/nvme0n1", 00:12:45.487 "name": "xnvme_bdev" 00:12:45.487 }, 00:12:45.487 "method": "bdev_xnvme_create" 00:12:45.487 }, 00:12:45.487 { 00:12:45.487 "method": "bdev_wait_for_examine" 00:12:45.487 } 00:12:45.487 ] 00:12:45.487 } 00:12:45.487 ] 00:12:45.487 } 00:12:45.487 [2024-11-29 18:26:05.285419] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:12:45.487 [2024-11-29 18:26:05.285611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81351 ] 00:12:45.748 [2024-11-29 18:26:05.449912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.748 [2024-11-29 18:26:05.477947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.748 Running I/O for 5 seconds... 00:12:47.786 31092.00 IOPS, 121.45 MiB/s [2024-11-29T18:26:08.635Z] 29436.50 IOPS, 114.99 MiB/s [2024-11-29T18:26:10.022Z] 29208.33 IOPS, 114.10 MiB/s [2024-11-29T18:26:10.969Z] 29176.25 IOPS, 113.97 MiB/s 00:12:51.064 Latency(us) 00:12:51.064 [2024-11-29T18:26:10.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.064 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:51.064 xnvme_bdev : 5.00 28915.26 112.95 0.00 0.00 2208.23 437.96 6427.57 00:12:51.064 [2024-11-29T18:26:10.969Z] =================================================================================================================== 00:12:51.064 [2024-11-29T18:26:10.969Z] Total : 28915.26 112.95 0.00 0.00 2208.23 437.96 6427.57 00:12:51.064 00:12:51.064 real 0m11.213s 00:12:51.064 user 0m3.241s 00:12:51.064 sys 0m6.499s 00:12:51.064 18:26:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.064 18:26:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:51.064 ************************************ 00:12:51.064 END TEST xnvme_bdevperf 00:12:51.064 ************************************ 00:12:51.064 18:26:10 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:51.064 18:26:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:51.064 18:26:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.064 18:26:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.064 ************************************ 00:12:51.064 START TEST xnvme_fio_plugin 00:12:51.064 ************************************ 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:51.064 18:26:10 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:51.064 18:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:51.064 { 00:12:51.064 "subsystems": [ 00:12:51.064 { 00:12:51.064 "subsystem": "bdev", 00:12:51.064 "config": [ 00:12:51.064 { 00:12:51.064 "params": { 00:12:51.064 "io_mechanism": "libaio", 00:12:51.064 "conserve_cpu": true, 00:12:51.064 "filename": "/dev/nvme0n1", 00:12:51.064 "name": "xnvme_bdev" 00:12:51.064 }, 00:12:51.064 "method": "bdev_xnvme_create" 00:12:51.064 }, 00:12:51.064 { 00:12:51.064 "method": "bdev_wait_for_examine" 00:12:51.064 } 00:12:51.064 ] 00:12:51.064 } 00:12:51.064 ] 00:12:51.064 } 00:12:51.327 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:51.327 fio-3.35 00:12:51.327 Starting 1 thread 00:12:57.920 00:12:57.920 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81459: Fri Nov 29 18:26:16 2024 00:12:57.920 read: IOPS=27.0k, BW=105MiB/s (111MB/s)(528MiB/5001msec) 00:12:57.920 slat (usec): min=4, max=2765, avg=31.15, stdev=110.92 00:12:57.920 clat (usec): min=100, max=5723, avg=1519.34, stdev=643.39 00:12:57.920 lat (usec): min=183, max=5773, avg=1550.49, stdev=632.58 00:12:57.920 clat percentiles (usec): 00:12:57.920 | 1.00th=[ 273], 5.00th=[ 545], 10.00th=[ 717], 20.00th=[ 963], 00:12:57.920 | 30.00th=[ 1156], 40.00th=[ 1336], 50.00th=[ 1500], 60.00th=[ 1647], 00:12:57.920 | 70.00th=[ 1811], 80.00th=[ 2008], 90.00th=[ 2343], 95.00th=[ 2638], 00:12:57.920 | 99.00th=[ 3294], 99.50th=[ 3523], 99.90th=[ 4047], 99.95th=[ 4359], 00:12:57.920 | 99.99th=[ 5080] 00:12:57.920 bw ( KiB/s): min=102760, max=115248, per=100.00%, avg=108272.89, stdev=4629.81, 
samples=9 00:12:57.920 iops : min=25690, max=28812, avg=27068.22, stdev=1157.45, samples=9 00:12:57.920 lat (usec) : 250=0.73%, 500=3.51%, 750=6.94%, 1000=10.66% 00:12:57.920 lat (msec) : 2=57.71%, 4=20.31%, 10=0.13% 00:12:57.920 cpu : usr=27.44%, sys=62.80%, ctx=13, majf=0, minf=773 00:12:57.920 IO depths : 1=0.3%, 2=0.8%, 4=2.6%, 8=8.4%, 16=24.7%, 32=61.2%, >=64=2.0% 00:12:57.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.920 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:57.920 issued rwts: total=135042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.920 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.920 00:12:57.920 Run status group 0 (all jobs): 00:12:57.920 READ: bw=105MiB/s (111MB/s), 105MiB/s-105MiB/s (111MB/s-111MB/s), io=528MiB (553MB), run=5001-5001msec 00:12:57.920 ----------------------------------------------------- 00:12:57.920 Suppressions used: 00:12:57.920 count bytes template 00:12:57.920 1 11 /usr/src/fio/parse.c 00:12:57.921 1 8 libtcmalloc_minimal.so 00:12:57.921 1 904 libcrypto.so 00:12:57.921 ----------------------------------------------------- 00:12:57.921 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:57.921 18:26:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:57.921 { 00:12:57.921 "subsystems": [ 00:12:57.921 { 00:12:57.921 "subsystem": "bdev", 00:12:57.921 "config": [ 00:12:57.921 { 00:12:57.921 "params": { 00:12:57.921 "io_mechanism": "libaio", 00:12:57.921 "conserve_cpu": true, 00:12:57.921 "filename": "/dev/nvme0n1", 00:12:57.921 "name": "xnvme_bdev" 00:12:57.921 }, 00:12:57.921 "method": "bdev_xnvme_create" 00:12:57.921 }, 00:12:57.921 { 00:12:57.921 "method": "bdev_wait_for_examine" 00:12:57.921 } 00:12:57.921 ] 00:12:57.921 } 00:12:57.921 ] 00:12:57.921 } 00:12:57.921 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:57.921 fio-3.35 00:12:57.921 Starting 1 thread 00:13:03.214 00:13:03.215 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81545: Fri Nov 29 18:26:22 2024 00:13:03.215 write: IOPS=32.6k, BW=127MiB/s (133MB/s)(636MiB/5001msec); 0 zone resets 00:13:03.215 slat (usec): min=4, max=2191, avg=20.71, stdev=98.15 00:13:03.215 clat (usec): min=107, max=6511, avg=1399.41, stdev=495.79 00:13:03.215 lat (usec): min=214, max=6517, avg=1420.11, stdev=484.27 00:13:03.215 clat percentiles (usec): 00:13:03.215 | 1.00th=[ 302], 5.00th=[ 611], 10.00th=[ 766], 20.00th=[ 988], 00:13:03.215 | 30.00th=[ 1156], 40.00th=[ 1287], 50.00th=[ 1401], 60.00th=[ 1516], 00:13:03.215 | 70.00th=[ 1647], 80.00th=[ 1778], 90.00th=[ 1975], 95.00th=[ 2180], 00:13:03.215 | 99.00th=[ 2769], 99.50th=[ 3064], 99.90th=[ 3556], 99.95th=[ 3785], 00:13:03.215 | 99.99th=[ 4555] 00:13:03.215 bw ( KiB/s): min=121424, max=138864, per=99.59%, avg=129767.11, stdev=5459.78, samples=9 00:13:03.215 iops : min=30356, max=34716, avg=32441.78, stdev=1364.94, samples=9 00:13:03.215 lat (usec) : 250=0.50%, 500=2.60%, 750=6.37%, 1000=11.22% 00:13:03.215 lat (msec) : 2=70.10%, 4=9.18%, 10=0.03% 00:13:03.215 cpu : usr=46.92%, sys=45.30%, ctx=11, majf=0, minf=774 00:13:03.215 IO depths : 1=0.6%, 2=1.4%, 4=3.2%, 8=8.1%, 16=22.1%, 32=62.4%, >=64=2.1% 00:13:03.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:03.215 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:03.215 issued rwts: total=0,162911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:03.215 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:03.215 00:13:03.215 Run status group 0 (all jobs): 00:13:03.215 WRITE: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=636MiB (667MB), run=5001-5001msec 00:13:03.215 ----------------------------------------------------- 00:13:03.215 Suppressions used: 00:13:03.215 count bytes template 00:13:03.215 1 11 /usr/src/fio/parse.c 00:13:03.215 1 8 libtcmalloc_minimal.so 00:13:03.215 1 904 libcrypto.so 00:13:03.215 ----------------------------------------------------- 00:13:03.215 00:13:03.215 ************************************ 00:13:03.215 END TEST xnvme_fio_plugin 00:13:03.215 ************************************ 00:13:03.215 
00:13:03.215 real 0m12.139s 00:13:03.215 user 0m4.886s 00:13:03.215 sys 0m5.997s 00:13:03.215 18:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.215 18:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:03.215 18:26:23 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:03.215 18:26:23 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:03.215 18:26:23 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:03.215 18:26:23 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:03.215 18:26:23 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:03.215 18:26:23 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:03.215 18:26:23 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:03.215 18:26:23 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:03.215 18:26:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:03.215 18:26:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:03.215 18:26:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.215 18:26:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.215 ************************************ 00:13:03.215 START TEST xnvme_rpc 00:13:03.215 ************************************ 00:13:03.215 18:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:03.215 18:26:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:03.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.215 18:26:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:03.215 18:26:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:03.215 18:26:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:03.215 18:26:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81631 00:13:03.215 18:26:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81631 00:13:03.215 18:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81631 ']' 00:13:03.215 18:26:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:03.215 18:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.215 18:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.215 18:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.215 18:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.215 18:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.477 [2024-11-29 18:26:23.171494] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:13:03.477 [2024-11-29 18:26:23.171870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81631 ] 00:13:03.477 [2024-11-29 18:26:23.333638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.477 [2024-11-29 18:26:23.362429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.422 xnvme_bdev 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:04.422 18:26:24 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81631 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81631 ']' 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81631 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81631 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:04.422 killing process with pid 81631 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81631' 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81631 00:13:04.422 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81631 00:13:04.685 ************************************ 00:13:04.685 END TEST xnvme_rpc 00:13:04.685 ************************************ 00:13:04.685 00:13:04.685 real 0m1.452s 00:13:04.685 user 0m1.519s 00:13:04.685 sys 0m0.415s 00:13:04.685 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.685 18:26:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.947 18:26:24 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:04.947 18:26:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:04.947 18:26:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.947 18:26:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.947 ************************************ 00:13:04.947 START TEST xnvme_bdevperf 00:13:04.947 ************************************ 00:13:04.947 18:26:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:04.947 18:26:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:04.947 18:26:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:04.947 18:26:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:04.947 18:26:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:04.947 18:26:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:04.947 18:26:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:04.947 18:26:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:04.947 { 00:13:04.947 "subsystems": [ 00:13:04.947 { 00:13:04.947 "subsystem": "bdev", 00:13:04.947 "config": [ 00:13:04.947 { 00:13:04.947 "params": { 00:13:04.947 "io_mechanism": "io_uring", 00:13:04.947 "conserve_cpu": false, 00:13:04.947 "filename": "/dev/nvme0n1", 00:13:04.947 "name": "xnvme_bdev" 00:13:04.947 }, 00:13:04.947 "method": "bdev_xnvme_create" 00:13:04.947 }, 00:13:04.947 { 00:13:04.947 "method": "bdev_wait_for_examine" 00:13:04.947 } 00:13:04.947 ] 00:13:04.947 } 00:13:04.947 ] 00:13:04.947 } 00:13:04.947 [2024-11-29 18:26:24.680624] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:13:04.947 [2024-11-29 18:26:24.680991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81683 ] 00:13:04.947 [2024-11-29 18:26:24.849750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.209 [2024-11-29 18:26:24.878761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.209 Running I/O for 5 seconds... 00:13:07.096 32364.00 IOPS, 126.42 MiB/s [2024-11-29T18:26:28.390Z] 32460.00 IOPS, 126.80 MiB/s [2024-11-29T18:26:29.335Z] 32877.33 IOPS, 128.43 MiB/s [2024-11-29T18:26:30.279Z] 33285.25 IOPS, 130.02 MiB/s 00:13:10.374 Latency(us) 00:13:10.374 [2024-11-29T18:26:30.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.374 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:10.374 xnvme_bdev : 5.00 33315.55 130.14 0.00 0.00 1917.50 381.24 12703.90 00:13:10.374 [2024-11-29T18:26:30.279Z] =================================================================================================================== 00:13:10.374 [2024-11-29T18:26:30.279Z] Total : 33315.55 130.14 0.00 0.00 1917.50 381.24 12703.90 00:13:10.374 18:26:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:10.374 18:26:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:10.374 18:26:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:10.374 18:26:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:10.374 18:26:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:10.374 { 00:13:10.374 "subsystems": [ 00:13:10.374 { 00:13:10.374 "subsystem": "bdev", 00:13:10.374 "config": [ 00:13:10.374 { 00:13:10.374 "params": { 00:13:10.374 "io_mechanism": "io_uring", 00:13:10.374 "conserve_cpu": false, 00:13:10.374 "filename": "/dev/nvme0n1", 00:13:10.374 "name": "xnvme_bdev" 00:13:10.374 }, 00:13:10.374 "method": "bdev_xnvme_create" 00:13:10.374 }, 00:13:10.374 { 00:13:10.374 "method": "bdev_wait_for_examine" 00:13:10.374 } 00:13:10.374 ] 00:13:10.374 } 00:13:10.374 ] 00:13:10.374 } 00:13:10.374 [2024-11-29 18:26:30.255974] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:13:10.374 [2024-11-29 18:26:30.256328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81753 ] 00:13:10.639 [2024-11-29 18:26:30.419878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.639 [2024-11-29 18:26:30.448824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.900 Running I/O for 5 seconds... 00:13:12.789 33733.00 IOPS, 131.77 MiB/s [2024-11-29T18:26:33.639Z] 33563.50 IOPS, 131.11 MiB/s [2024-11-29T18:26:34.584Z] 33503.00 IOPS, 130.87 MiB/s [2024-11-29T18:26:35.970Z] 34370.25 IOPS, 134.26 MiB/s [2024-11-29T18:26:35.970Z] 34362.80 IOPS, 134.23 MiB/s 00:13:16.065 Latency(us) 00:13:16.065 [2024-11-29T18:26:35.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.065 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:16.065 xnvme_bdev : 5.00 34341.70 134.15 0.00 0.00 1860.00 333.98 8469.27 00:13:16.065 [2024-11-29T18:26:35.970Z] =================================================================================================================== 00:13:16.065 [2024-11-29T18:26:35.970Z] Total : 34341.70 134.15 0.00 0.00 1860.00 333.98 8469.27 00:13:16.065 00:13:16.065 real 0m11.144s 00:13:16.065 user 0m4.573s 00:13:16.065 sys 0m6.324s 00:13:16.065 18:26:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.065 18:26:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:16.065 ************************************ 00:13:16.065 END TEST xnvme_bdevperf 00:13:16.065 ************************************ 00:13:16.065 18:26:35 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:16.065 18:26:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:16.065 18:26:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.065 18:26:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:16.065 ************************************ 00:13:16.065 START TEST xnvme_fio_plugin 00:13:16.065 ************************************ 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:16.065 18:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:16.066 18:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:16.066 { 00:13:16.066 "subsystems": [ 00:13:16.066 { 00:13:16.066 "subsystem": "bdev", 00:13:16.066 "config": [ 00:13:16.066 { 00:13:16.066 "params": { 00:13:16.066 "io_mechanism": "io_uring", 00:13:16.066 "conserve_cpu": false, 00:13:16.066 "filename": "/dev/nvme0n1", 00:13:16.066 "name": "xnvme_bdev" 00:13:16.066 }, 00:13:16.066 "method": "bdev_xnvme_create" 00:13:16.066 }, 00:13:16.066 { 00:13:16.066 "method": "bdev_wait_for_examine" 00:13:16.066 } 00:13:16.066 ] 00:13:16.066 } 00:13:16.066 ] 00:13:16.066 } 00:13:16.326 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:16.326 fio-3.35 00:13:16.326 Starting 1 thread 00:13:21.619 00:13:21.619 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81861: Fri Nov 29 18:26:41 2024 00:13:21.619 read: IOPS=31.6k, BW=123MiB/s (129MB/s)(617MiB/5001msec) 00:13:21.619 slat (usec): min=2, max=116, avg= 3.35, stdev= 1.74 00:13:21.619 clat (usec): min=960, max=4709, avg=1889.66, stdev=313.90 00:13:21.619 lat (usec): min=963, max=4712, avg=1893.01, stdev=314.11 00:13:21.619 clat percentiles (usec): 00:13:21.619 | 1.00th=[ 1303], 5.00th=[ 1434], 10.00th=[ 1516], 20.00th=[ 1614], 00:13:21.619 | 30.00th=[ 1713], 40.00th=[ 1778], 50.00th=[ 1860], 60.00th=[ 1942], 00:13:21.619 | 70.00th=[ 2024], 80.00th=[ 2147], 90.00th=[ 2311], 95.00th=[ 2474], 00:13:21.619 | 99.00th=[ 2737], 99.50th=[ 2835], 99.90th=[ 3097], 99.95th=[ 3195], 00:13:21.619 | 99.99th=[ 3687] 00:13:21.619 bw ( KiB/s): 
min=121344, max=133120, per=99.56%, avg=125781.11, stdev=3318.71, samples=9 00:13:21.619 iops : min=30336, max=33280, avg=31445.22, stdev=829.66, samples=9 00:13:21.619 lat (usec) : 1000=0.01% 00:13:21.619 lat (msec) : 2=67.29%, 4=32.70%, 10=0.01% 00:13:21.619 cpu : usr=31.34%, sys=67.56%, ctx=15, majf=0, minf=771 00:13:21.619 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:21.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.619 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:21.619 issued rwts: total=157951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.619 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:21.619 00:13:21.619 Run status group 0 (all jobs): 00:13:21.619 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=617MiB (647MB), run=5001-5001msec 00:13:22.193 ----------------------------------------------------- 00:13:22.193 Suppressions used: 00:13:22.193 count bytes template 00:13:22.193 1 11 /usr/src/fio/parse.c 00:13:22.193 1 8 libtcmalloc_minimal.so 00:13:22.193 1 904 libcrypto.so 00:13:22.193 ----------------------------------------------------- 00:13:22.193 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:22.194 18:26:41 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:22.194 18:26:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:22.194 { 00:13:22.194 "subsystems": [ 00:13:22.194 { 00:13:22.194 "subsystem": "bdev", 00:13:22.194 "config": [ 00:13:22.194 { 00:13:22.194 "params": { 00:13:22.194 "io_mechanism": "io_uring", 00:13:22.194 "conserve_cpu": false, 00:13:22.194 "filename": "/dev/nvme0n1", 00:13:22.194 "name": "xnvme_bdev" 00:13:22.194 }, 00:13:22.194 "method": "bdev_xnvme_create" 00:13:22.194 }, 00:13:22.194 { 00:13:22.194 "method": "bdev_wait_for_examine" 00:13:22.194 } 00:13:22.194 ] 00:13:22.194 } 00:13:22.194 ] 00:13:22.194 } 00:13:22.456 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:22.456 fio-3.35 00:13:22.456 Starting 1 thread 00:13:27.820 00:13:27.820 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81942: Fri Nov 29 18:26:47 2024 00:13:27.820 write: IOPS=32.9k, BW=129MiB/s (135MB/s)(643MiB/5001msec); 0 zone resets 00:13:27.820 slat (nsec): min=2913, max=73538, avg=3880.56, stdev=1457.85 00:13:27.820 clat (usec): min=144, max=6827, avg=1794.13, stdev=306.29 00:13:27.820 lat (usec): min=153, max=6831, avg=1798.01, stdev=306.40 00:13:27.820 clat percentiles (usec): 00:13:27.820 | 1.00th=[ 1221], 5.00th=[ 1352], 10.00th=[ 1434], 20.00th=[ 1532], 00:13:27.820 | 30.00th=[ 1614], 40.00th=[ 1696], 50.00th=[ 1762], 60.00th=[ 1860], 00:13:27.820 | 70.00th=[ 1942], 80.00th=[ 2040], 90.00th=[ 2180], 95.00th=[ 2311], 00:13:27.820 | 99.00th=[ 2606], 99.50th=[ 2802], 99.90th=[ 3195], 99.95th=[ 3458], 00:13:27.820 | 99.99th=[ 4146] 00:13:27.820 bw ( KiB/s): min=128840, max=135240, per=100.00%, avg=132131.56, stdev=2287.74, samples=9 00:13:27.820 iops : min=32210, max=33810, avg=33032.89, stdev=571.94, samples=9 00:13:27.820 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:13:27.820 lat (msec) : 2=75.95%, 4=23.99%, 10=0.01% 00:13:27.821 cpu : usr=33.80%, sys=65.20%, ctx=13, majf=0, minf=772 00:13:27.821 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.2%, >=64=1.6% 00:13:27.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.821 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:27.821 issued rwts: total=0,164627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.821 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:27.821 00:13:27.821 Run status group 0 (all jobs): 00:13:27.821 WRITE: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=643MiB (674MB), run=5001-5001msec 00:13:28.394 ----------------------------------------------------- 00:13:28.394 Suppressions used: 00:13:28.394 count bytes template 00:13:28.394 1 11 /usr/src/fio/parse.c 00:13:28.394 1 8 libtcmalloc_minimal.so 00:13:28.394 1 904 libcrypto.so 00:13:28.394 ----------------------------------------------------- 00:13:28.394 00:13:28.394 00:13:28.394 real 0m12.198s 00:13:28.394 user 0m4.552s 00:13:28.394 sys 0m7.234s 
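The LD_PRELOAD line assembled above is the whole trick to driving an SPDK bdev from fio: preload the ASAN runtime (only because this build is ASAN-instrumented) together with the spdk_bdev ioengine plugin. A minimal standalone sketch, assuming the fio and plugin paths from this run:

# Hedged sketch of the randwrite fio invocation the wrapper built above.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "io_mechanism": "io_uring", "conserve_cpu": false,
                "filename": "/dev/nvme0n1", "name": "xnvme_bdev" },
    "method": "bdev_xnvme_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
# libasan path and fio location as detected in this run (ldd + grep above)
LD_PRELOAD="/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev" \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf="$CONF" \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
rm -f "$CONF"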
00:13:28.394 ************************************ 00:13:28.394 END TEST xnvme_fio_plugin 00:13:28.394 ************************************ 00:13:28.394 18:26:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.394 18:26:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:28.394 18:26:48 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:28.394 18:26:48 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:28.394 18:26:48 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:28.394 18:26:48 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:28.394 18:26:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:28.394 18:26:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.394 18:26:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.394 ************************************ 00:13:28.394 START TEST xnvme_rpc 00:13:28.394 ************************************ 00:13:28.394 18:26:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:28.394 18:26:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:28.394 18:26:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:28.394 18:26:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:28.394 18:26:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:28.394 18:26:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82017 00:13:28.394 18:26:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82017 00:13:28.394 18:26:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82017 ']' 00:13:28.394 18:26:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.394 18:26:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.394 18:26:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.394 18:26:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:28.394 18:26:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.394 18:26:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.394 [2024-11-29 18:26:48.197480] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:13:28.394 [2024-11-29 18:26:48.197806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82017 ] 00:13:28.655 [2024-11-29 18:26:48.361231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.655 [2024-11-29 18:26:48.402106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.227 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.227 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:29.227 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:29.227 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.227 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.227 xnvme_bdev 00:13:29.227 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.227 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:29.227 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.227 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.227 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.227 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:29.227 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.227 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:29.227 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:29.227 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.227 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:29.227 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.227 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.228 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.228 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:29.488 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:29.488 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.488 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:29.488 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.488 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.488 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.488 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:29.488 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:29.488 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.488 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:29.488 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.488 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.489 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.489 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:29.489 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:29.489 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.489 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.489 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.489 18:26:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82017 00:13:29.489 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82017 ']' 00:13:29.489 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82017 00:13:29.489 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:29.489 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.489 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82017 00:13:29.489 killing process with pid 82017 00:13:29.489 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:29.489 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.489 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82017' 00:13:29.489 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82017 00:13:29.489 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82017 00:13:29.750 00:13:29.750 real 0m1.435s 00:13:29.750 user 0m1.477s 00:13:29.750 sys 0m0.460s 00:13:29.750 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.750 ************************************ 00:13:29.750 END TEST xnvme_rpc 00:13:29.750 ************************************ 00:13:29.750 18:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.750 18:26:49 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:29.750 18:26:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:29.750 18:26:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.750 18:26:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.750 ************************************ 00:13:29.750 START TEST xnvme_bdevperf 00:13:29.750 ************************************ 00:13:29.750 18:26:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:29.750 18:26:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:29.750 18:26:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:29.750 18:26:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:29.750 18:26:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:29.750 18:26:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
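Stripped of the xtrace noise, the xnvme_rpc sequence that just finished is three RPCs against a bare spdk_tgt. A hedged sketch, assuming rpc.py talks to the default /var/tmp/spdk.sock (rpc_cmd in the harness is a thin wrapper around it):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c     # -c maps to conserve_cpu=true
"$RPC" framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
"$RPC" bdev_xnvme_delete xnvme_bdev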
00:13:29.750 18:26:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:29.750 18:26:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:29.750 { 00:13:29.750 "subsystems": [ 00:13:29.750 { 00:13:29.750 "subsystem": "bdev", 00:13:29.750 "config": [ 00:13:29.750 { 00:13:29.750 "params": { 00:13:29.750 "io_mechanism": "io_uring", 00:13:29.750 "conserve_cpu": true, 00:13:29.750 "filename": "/dev/nvme0n1", 00:13:29.750 "name": "xnvme_bdev" 00:13:29.750 }, 00:13:29.750 "method": "bdev_xnvme_create" 00:13:29.750 }, 00:13:29.750 { 00:13:29.750 "method": "bdev_wait_for_examine" 00:13:29.750 } 00:13:29.750 ] 00:13:29.750 } 00:13:29.750 ] 00:13:29.750 } 00:13:30.012 [2024-11-29 18:26:49.688486] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:13:30.012 [2024-11-29 18:26:49.688638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82080 ] 00:13:30.012 [2024-11-29 18:26:49.853657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.012 [2024-11-29 18:26:49.882070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.273 Running I/O for 5 seconds... 00:13:32.160 32560.00 IOPS, 127.19 MiB/s [2024-11-29T18:26:53.008Z] 32293.50 IOPS, 126.15 MiB/s [2024-11-29T18:26:54.394Z] 32582.00 IOPS, 127.27 MiB/s [2024-11-29T18:26:55.338Z] 32615.75 IOPS, 127.41 MiB/s [2024-11-29T18:26:55.338Z] 32497.40 IOPS, 126.94 MiB/s 00:13:35.433 Latency(us) 00:13:35.433 [2024-11-29T18:26:55.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.433 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:35.433 xnvme_bdev : 5.00 32497.57 126.94 0.00 0.00 1965.68 970.44 12653.49 00:13:35.433 [2024-11-29T18:26:55.338Z] =================================================================================================================== 00:13:35.433 [2024-11-29T18:26:55.338Z] Total : 32497.57 126.94 0.00 0.00 1965.68 970.44 12653.49 00:13:35.433 18:26:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:35.433 18:26:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:35.433 18:26:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:35.433 18:26:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:35.433 18:26:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:35.433 { 00:13:35.433 "subsystems": [ 00:13:35.433 { 00:13:35.433 "subsystem": "bdev", 00:13:35.433 "config": [ 00:13:35.433 { 00:13:35.433 "params": { 00:13:35.433 "io_mechanism": "io_uring", 00:13:35.433 "conserve_cpu": true, 00:13:35.433 "filename": "/dev/nvme0n1", 00:13:35.433 "name": "xnvme_bdev" 00:13:35.433 }, 00:13:35.433 "method": "bdev_xnvme_create" 00:13:35.433 }, 00:13:35.433 { 00:13:35.433 "method": "bdev_wait_for_examine" 00:13:35.433 } 00:13:35.433 ] 00:13:35.433 } 00:13:35.433 ] 00:13:35.433 } 00:13:35.433 [2024-11-29 18:26:55.251085] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:13:35.433 [2024-11-29 18:26:55.251235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82144 ] 00:13:35.693 [2024-11-29 18:26:55.417194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.693 [2024-11-29 18:26:55.445375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.693 Running I/O for 5 seconds... 00:13:37.656 34623.00 IOPS, 135.25 MiB/s [2024-11-29T18:26:58.951Z] 33754.50 IOPS, 131.85 MiB/s [2024-11-29T18:26:59.896Z] 34132.00 IOPS, 133.33 MiB/s [2024-11-29T18:27:00.842Z] 34537.50 IOPS, 134.91 MiB/s [2024-11-29T18:27:00.842Z] 34638.20 IOPS, 135.31 MiB/s 00:13:40.937 Latency(us) 00:13:40.937 [2024-11-29T18:27:00.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.937 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:40.937 xnvme_bdev : 5.00 34636.10 135.30 0.00 0.00 1844.15 762.49 4789.17 00:13:40.937 [2024-11-29T18:27:00.842Z] =================================================================================================================== 00:13:40.937 [2024-11-29T18:27:00.842Z] Total : 34636.10 135.30 0.00 0.00 1844.15 762.49 4789.17 00:13:40.937 00:13:40.937 real 0m11.116s 00:13:40.937 user 0m8.094s 00:13:40.937 sys 0m2.527s 00:13:40.937 ************************************ 00:13:40.937 END TEST xnvme_bdevperf 00:13:40.937 ************************************ 00:13:40.937 18:27:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.937 18:27:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:40.937 18:27:00 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:40.937 18:27:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:40.937 18:27:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.937 18:27:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.937 ************************************ 00:13:40.937 START TEST xnvme_fio_plugin 00:13:40.937 ************************************ 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:40.937 18:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:40.937 { 00:13:40.937 "subsystems": [ 00:13:40.937 { 00:13:40.937 "subsystem": "bdev", 00:13:40.937 "config": [ 00:13:40.937 { 00:13:40.937 "params": { 00:13:40.938 "io_mechanism": "io_uring", 00:13:40.938 "conserve_cpu": true, 00:13:40.938 "filename": "/dev/nvme0n1", 00:13:40.938 "name": "xnvme_bdev" 00:13:40.938 }, 00:13:40.938 "method": "bdev_xnvme_create" 00:13:40.938 }, 00:13:40.938 { 00:13:40.938 "method": "bdev_wait_for_examine" 00:13:40.938 } 00:13:40.938 ] 00:13:40.938 } 00:13:40.938 ] 00:13:40.938 } 00:13:41.199 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:41.199 fio-3.35 00:13:41.199 Starting 1 thread 00:13:47.790 00:13:47.790 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82247: Fri Nov 29 18:27:06 2024 00:13:47.790 read: IOPS=33.8k, BW=132MiB/s (138MB/s)(660MiB/5001msec) 00:13:47.790 slat (nsec): min=2870, max=88491, avg=3302.79, stdev=1566.60 00:13:47.790 clat (usec): min=1078, max=3601, avg=1761.94, stdev=267.01 00:13:47.790 lat (usec): min=1081, max=3634, avg=1765.25, stdev=267.20 00:13:47.790 clat percentiles (usec): 00:13:47.790 | 1.00th=[ 1270], 5.00th=[ 1385], 10.00th=[ 1450], 20.00th=[ 1532], 00:13:47.790 | 30.00th=[ 1614], 40.00th=[ 1680], 50.00th=[ 1745], 60.00th=[ 1811], 00:13:47.790 | 70.00th=[ 1876], 80.00th=[ 1975], 90.00th=[ 2114], 95.00th=[ 2245], 00:13:47.790 | 99.00th=[ 2540], 99.50th=[ 2671], 99.90th=[ 2966], 99.95th=[ 3326], 00:13:47.790 | 99.99th=[ 3523] 00:13:47.790 bw 
( KiB/s): min=123392, max=142336, per=100.00%, avg=135395.56, stdev=6481.96, samples=9 00:13:47.790 iops : min=30848, max=35584, avg=33848.89, stdev=1620.49, samples=9 00:13:47.790 lat (msec) : 2=83.02%, 4=16.98% 00:13:47.790 cpu : usr=70.66%, sys=26.06%, ctx=22, majf=0, minf=771 00:13:47.790 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:47.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.790 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:47.790 issued rwts: total=168960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.790 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:47.790 00:13:47.790 Run status group 0 (all jobs): 00:13:47.790 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=660MiB (692MB), run=5001-5001msec 00:13:47.790 ----------------------------------------------------- 00:13:47.790 Suppressions used: 00:13:47.790 count bytes template 00:13:47.790 1 11 /usr/src/fio/parse.c 00:13:47.790 1 8 libtcmalloc_minimal.so 00:13:47.790 1 904 libcrypto.so 00:13:47.790 ----------------------------------------------------- 00:13:47.790 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:47.790 18:27:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:47.790 { 00:13:47.790 "subsystems": [ 00:13:47.790 { 00:13:47.790 "subsystem": "bdev", 00:13:47.790 "config": [ 00:13:47.790 { 00:13:47.790 "params": { 00:13:47.790 "io_mechanism": "io_uring", 00:13:47.790 "conserve_cpu": true, 00:13:47.790 "filename": "/dev/nvme0n1", 00:13:47.790 "name": "xnvme_bdev" 00:13:47.790 }, 00:13:47.790 "method": "bdev_xnvme_create" 00:13:47.790 }, 00:13:47.790 { 00:13:47.790 "method": "bdev_wait_for_examine" 00:13:47.790 } 00:13:47.790 ] 00:13:47.790 } 00:13:47.790 ] 00:13:47.790 } 00:13:47.790 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:47.790 fio-3.35 00:13:47.790 Starting 1 thread 00:13:53.082 00:13:53.082 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82333: Fri Nov 29 18:27:12 2024 00:13:53.082 write: IOPS=33.6k, BW=131MiB/s (138MB/s)(657MiB/5001msec); 0 zone resets 00:13:53.082 slat (nsec): min=2901, max=81861, avg=3726.43, stdev=1683.06 00:13:53.082 clat (usec): min=871, max=4098, avg=1755.71, stdev=265.89 00:13:53.082 lat (usec): min=885, max=4137, avg=1759.43, stdev=266.11 00:13:53.082 clat percentiles (usec): 00:13:53.082 | 1.00th=[ 1270], 5.00th=[ 1385], 10.00th=[ 1450], 20.00th=[ 1532], 00:13:53.082 | 30.00th=[ 1598], 40.00th=[ 1663], 50.00th=[ 1729], 60.00th=[ 1795], 00:13:53.082 | 70.00th=[ 1876], 80.00th=[ 1958], 90.00th=[ 2089], 95.00th=[ 2212], 00:13:53.082 | 99.00th=[ 2474], 99.50th=[ 2606], 99.90th=[ 3097], 99.95th=[ 3621], 00:13:53.082 | 99.99th=[ 3982] 00:13:53.082 bw ( KiB/s): min=128702, max=141312, per=99.99%, avg=134534.00, stdev=4087.77, samples=9 00:13:53.082 iops : min=32175, max=35328, avg=33633.44, stdev=1022.03, samples=9 00:13:53.082 lat (usec) : 1000=0.03% 00:13:53.082 lat (msec) : 2=83.54%, 4=16.43%, 10=0.01% 00:13:53.082 cpu : usr=71.56%, sys=25.24%, ctx=11, majf=0, minf=772 00:13:53.082 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:53.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.082 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:53.082 issued rwts: total=0,168212,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.082 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:53.082 00:13:53.082 Run status group 0 (all jobs): 00:13:53.082 WRITE: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=657MiB (689MB), run=5001-5001msec 00:13:53.082 ----------------------------------------------------- 00:13:53.082 Suppressions used: 00:13:53.082 count bytes template 00:13:53.082 1 11 /usr/src/fio/parse.c 00:13:53.082 1 8 libtcmalloc_minimal.so 00:13:53.082 1 904 libcrypto.so 00:13:53.082 ----------------------------------------------------- 00:13:53.082 00:13:53.082 ************************************ 00:13:53.082 END TEST xnvme_fio_plugin 00:13:53.082 ************************************ 00:13:53.082 00:13:53.082 real 
0m12.120s 00:13:53.082 user 0m8.321s 00:13:53.082 sys 0m3.150s 00:13:53.082 18:27:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.082 18:27:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:53.082 18:27:12 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:53.082 18:27:12 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:13:53.082 18:27:12 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:13:53.082 18:27:12 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:13:53.082 18:27:12 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:53.082 18:27:12 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:53.082 18:27:12 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:53.082 18:27:12 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:53.082 18:27:12 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:53.082 18:27:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:53.082 18:27:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:53.082 18:27:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.343 ************************************ 00:13:53.343 START TEST xnvme_rpc 00:13:53.343 ************************************ 00:13:53.343 18:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:53.343 18:27:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:53.343 18:27:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:53.343 18:27:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:53.343 18:27:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:53.343 18:27:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82408 00:13:53.343 18:27:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82408 00:13:53.343 18:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82408 ']' 00:13:53.343 18:27:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.343 18:27:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.343 18:27:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.343 18:27:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.343 18:27:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.343 18:27:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:53.343 [2024-11-29 18:27:13.097654] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:13:53.343 [2024-11-29 18:27:13.098063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82408 ] 00:13:53.602 [2024-11-29 18:27:13.273142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.602 [2024-11-29 18:27:13.313698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.169 18:27:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:54.169 18:27:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:54.169 18:27:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:13:54.169 18:27:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.169 18:27:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.169 xnvme_bdev 00:13:54.169 18:27:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.169 18:27:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:54.169 18:27:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:54.169 18:27:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.169 18:27:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.169 18:27:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:54.169 18:27:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.169 18:27:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:54.169 18:27:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:54.169 18:27:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:54.169 18:27:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.169 18:27:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.169 18:27:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:54.169 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.169 18:27:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:13:54.169 18:27:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:54.169 18:27:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:54.169 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.169 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.169 18:27:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:54.169 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.169 18:27:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:13:54.169 18:27:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:54.169 18:27:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:54.169 18:27:14 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.169 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.169 18:27:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:54.445 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.445 18:27:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:54.445 18:27:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:54.445 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.445 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.445 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.445 18:27:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82408 00:13:54.445 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82408 ']' 00:13:54.445 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82408 00:13:54.445 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:54.445 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:54.445 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82408 00:13:54.445 killing process with pid 82408 00:13:54.445 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:54.445 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:54.445 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82408' 00:13:54.445 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82408 00:13:54.445 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82408 00:13:54.705 ************************************ 00:13:54.705 END TEST xnvme_rpc 00:13:54.705 ************************************ 00:13:54.705 00:13:54.705 real 0m1.608s 00:13:54.705 user 0m1.569s 00:13:54.705 sys 0m0.526s 00:13:54.705 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.705 18:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.966 18:27:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:54.966 18:27:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:54.966 18:27:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.966 18:27:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:54.966 ************************************ 00:13:54.966 START TEST xnvme_bdevperf 00:13:54.966 ************************************ 00:13:54.966 18:27:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:54.966 18:27:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:54.966 18:27:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:13:54.966 18:27:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:54.966 18:27:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:54.966 18:27:14 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:54.966 18:27:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:54.966 18:27:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:54.966 { 00:13:54.966 "subsystems": [ 00:13:54.966 { 00:13:54.966 "subsystem": "bdev", 00:13:54.966 "config": [ 00:13:54.966 { 00:13:54.966 "params": { 00:13:54.966 "io_mechanism": "io_uring_cmd", 00:13:54.966 "conserve_cpu": false, 00:13:54.966 "filename": "/dev/ng0n1", 00:13:54.966 "name": "xnvme_bdev" 00:13:54.966 }, 00:13:54.966 "method": "bdev_xnvme_create" 00:13:54.966 }, 00:13:54.966 { 00:13:54.966 "method": "bdev_wait_for_examine" 00:13:54.966 } 00:13:54.966 ] 00:13:54.966 } 00:13:54.966 ] 00:13:54.966 } 00:13:54.966 [2024-11-29 18:27:14.749650] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:13:54.966 [2024-11-29 18:27:14.749792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82471 ] 00:13:55.226 [2024-11-29 18:27:14.909675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.226 [2024-11-29 18:27:14.938994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.226 Running I/O for 5 seconds... 00:13:57.553 33857.00 IOPS, 132.25 MiB/s [2024-11-29T18:27:18.400Z] 34037.50 IOPS, 132.96 MiB/s [2024-11-29T18:27:19.344Z] 33683.67 IOPS, 131.58 MiB/s [2024-11-29T18:27:20.314Z] 34572.50 IOPS, 135.05 MiB/s 00:14:00.409 Latency(us) 00:14:00.409 [2024-11-29T18:27:20.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.409 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:00.409 xnvme_bdev : 5.00 35013.87 136.77 0.00 0.00 1824.58 376.52 8318.03 00:14:00.409 [2024-11-29T18:27:20.314Z] =================================================================================================================== 00:14:00.409 [2024-11-29T18:27:20.314Z] Total : 35013.87 136.77 0.00 0.00 1824.58 376.52 8318.03 00:14:00.409 18:27:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:00.409 18:27:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:00.410 18:27:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:00.410 18:27:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:00.410 18:27:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:00.410 { 00:14:00.410 "subsystems": [ 00:14:00.410 { 00:14:00.410 "subsystem": "bdev", 00:14:00.410 "config": [ 00:14:00.410 { 00:14:00.410 "params": { 00:14:00.410 "io_mechanism": "io_uring_cmd", 00:14:00.410 "conserve_cpu": false, 00:14:00.410 "filename": "/dev/ng0n1", 00:14:00.410 "name": "xnvme_bdev" 00:14:00.410 }, 00:14:00.410 "method": "bdev_xnvme_create" 00:14:00.410 }, 00:14:00.410 { 00:14:00.410 "method": "bdev_wait_for_examine" 00:14:00.410 } 00:14:00.410 ] 00:14:00.410 } 00:14:00.410 ] 00:14:00.410 } 00:14:00.410 [2024-11-29 18:27:20.304829] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:14:00.410 [2024-11-29 18:27:20.305216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82534 ] 00:14:00.671 [2024-11-29 18:27:20.469960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.671 [2024-11-29 18:27:20.498487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.933 Running I/O for 5 seconds... 00:14:02.817 36909.00 IOPS, 144.18 MiB/s [2024-11-29T18:27:23.665Z] 36942.50 IOPS, 144.31 MiB/s [2024-11-29T18:27:24.610Z] 36883.33 IOPS, 144.08 MiB/s [2024-11-29T18:27:25.999Z] 36861.00 IOPS, 143.99 MiB/s [2024-11-29T18:27:25.999Z] 37034.80 IOPS, 144.67 MiB/s 00:14:06.094 Latency(us) 00:14:06.094 [2024-11-29T18:27:25.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.094 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:06.094 xnvme_bdev : 5.01 37006.32 144.56 0.00 0.00 1725.83 360.76 8721.33 00:14:06.094 [2024-11-29T18:27:25.999Z] =================================================================================================================== 00:14:06.094 [2024-11-29T18:27:25.999Z] Total : 37006.32 144.56 0.00 0.00 1725.83 360.76 8721.33 00:14:06.094 18:27:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:06.094 18:27:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:06.094 18:27:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:06.094 18:27:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:06.094 18:27:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:06.094 { 00:14:06.094 "subsystems": [ 00:14:06.094 { 00:14:06.094 "subsystem": "bdev", 00:14:06.094 "config": [ 00:14:06.094 { 00:14:06.094 "params": { 00:14:06.094 "io_mechanism": "io_uring_cmd", 00:14:06.094 "conserve_cpu": false, 00:14:06.094 "filename": "/dev/ng0n1", 00:14:06.095 "name": "xnvme_bdev" 00:14:06.095 }, 00:14:06.095 "method": "bdev_xnvme_create" 00:14:06.095 }, 00:14:06.095 { 00:14:06.095 "method": "bdev_wait_for_examine" 00:14:06.095 } 00:14:06.095 ] 00:14:06.095 } 00:14:06.095 ] 00:14:06.095 } 00:14:06.095 [2024-11-29 18:27:25.875248] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:14:06.095 [2024-11-29 18:27:25.875420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82603 ] 00:14:06.354 [2024-11-29 18:27:26.043640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.354 [2024-11-29 18:27:26.071216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.354 Running I/O for 5 seconds... 
00:14:08.691 79104.00 IOPS, 309.00 MiB/s [2024-11-29T18:27:29.539Z] 74400.00 IOPS, 290.62 MiB/s [2024-11-29T18:27:30.483Z] 76266.67 IOPS, 297.92 MiB/s [2024-11-29T18:27:31.426Z] 78784.00 IOPS, 307.75 MiB/s 00:14:11.521 Latency(us) 00:14:11.521 [2024-11-29T18:27:31.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.521 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:11.521 xnvme_bdev : 5.00 80617.64 314.91 0.00 0.00 790.48 463.16 5469.74 00:14:11.521 [2024-11-29T18:27:31.426Z] =================================================================================================================== 00:14:11.521 [2024-11-29T18:27:31.426Z] Total : 80617.64 314.91 0.00 0.00 790.48 463.16 5469.74 00:14:11.521 18:27:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:11.521 18:27:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:11.521 18:27:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:11.521 18:27:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:11.521 18:27:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:11.521 { 00:14:11.521 "subsystems": [ 00:14:11.521 { 00:14:11.521 "subsystem": "bdev", 00:14:11.521 "config": [ 00:14:11.521 { 00:14:11.521 "params": { 00:14:11.521 "io_mechanism": "io_uring_cmd", 00:14:11.521 "conserve_cpu": false, 00:14:11.521 "filename": "/dev/ng0n1", 00:14:11.521 "name": "xnvme_bdev" 00:14:11.521 }, 00:14:11.521 "method": "bdev_xnvme_create" 00:14:11.521 }, 00:14:11.521 { 00:14:11.521 "method": "bdev_wait_for_examine" 00:14:11.521 } 00:14:11.521 ] 00:14:11.521 } 00:14:11.521 ] 00:14:11.521 } 00:14:11.782 [2024-11-29 18:27:31.428567] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:14:11.782 [2024-11-29 18:27:31.428887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82666 ] 00:14:11.782 [2024-11-29 18:27:31.592291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.782 [2024-11-29 18:27:31.623198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.043 Running I/O for 5 seconds... 
00:14:13.929 48688.00 IOPS, 190.19 MiB/s [2024-11-29T18:27:34.777Z] 44983.50 IOPS, 175.72 MiB/s [2024-11-29T18:27:36.158Z] 44284.67 IOPS, 172.99 MiB/s [2024-11-29T18:27:36.730Z] 42894.75 IOPS, 167.56 MiB/s [2024-11-29T18:27:36.730Z] 42044.60 IOPS, 164.24 MiB/s 00:14:16.825 Latency(us) 00:14:16.825 [2024-11-29T18:27:36.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.825 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:16.825 xnvme_bdev : 5.01 42013.03 164.11 0.00 0.00 1519.46 102.40 21979.77 00:14:16.825 [2024-11-29T18:27:36.730Z] =================================================================================================================== 00:14:16.825 [2024-11-29T18:27:36.730Z] Total : 42013.03 164.11 0.00 0.00 1519.46 102.40 21979.77 00:14:17.087 00:14:17.087 real 0m22.220s 00:14:17.087 user 0m11.165s 00:14:17.087 sys 0m10.592s 00:14:17.087 18:27:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:17.087 18:27:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:17.087 ************************************ 00:14:17.087 END TEST xnvme_bdevperf 00:14:17.087 ************************************ 00:14:17.087 18:27:36 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:17.087 18:27:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:17.087 18:27:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:17.087 18:27:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.087 ************************************ 00:14:17.087 START TEST xnvme_fio_plugin 00:14:17.087 ************************************ 00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 
00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:17.087 18:27:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:17.349 18:27:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:17.349 18:27:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:17.349 18:27:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:17.349 18:27:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:17.349 18:27:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:17.349 { 00:14:17.349 "subsystems": [ 00:14:17.349 { 00:14:17.349 "subsystem": "bdev", 00:14:17.349 "config": [ 00:14:17.349 { 00:14:17.349 "params": { 00:14:17.349 "io_mechanism": "io_uring_cmd", 00:14:17.349 "conserve_cpu": false, 00:14:17.349 "filename": "/dev/ng0n1", 00:14:17.349 "name": "xnvme_bdev" 00:14:17.349 }, 00:14:17.349 "method": "bdev_xnvme_create" 00:14:17.349 }, 00:14:17.349 { 00:14:17.349 "method": "bdev_wait_for_examine" 00:14:17.349 } 00:14:17.349 ] 00:14:17.349 } 00:14:17.349 ] 00:14:17.349 } 00:14:17.349 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:17.349 fio-3.35 00:14:17.349 Starting 1 thread 00:14:23.938 00:14:23.938 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82768: Fri Nov 29 18:27:42 2024 00:14:23.938 read: IOPS=42.3k, BW=165MiB/s (173MB/s)(826MiB/5001msec) 00:14:23.938 slat (nsec): min=2129, max=56538, avg=2930.49, stdev=980.77 00:14:23.938 clat (usec): min=662, max=3265, avg=1400.86, stdev=308.48 00:14:23.938 lat (usec): min=664, max=3268, avg=1403.79, stdev=308.64 00:14:23.938 clat percentiles (usec): 00:14:23.938 | 1.00th=[ 865], 5.00th=[ 971], 10.00th=[ 1045], 20.00th=[ 1156], 00:14:23.938 | 30.00th=[ 1221], 40.00th=[ 1287], 50.00th=[ 1352], 60.00th=[ 1434], 00:14:23.938 | 70.00th=[ 1516], 80.00th=[ 1631], 90.00th=[ 1827], 95.00th=[ 1975], 00:14:23.938 | 99.00th=[ 2278], 99.50th=[ 2409], 99.90th=[ 2704], 99.95th=[ 2802], 00:14:23.938 | 99.99th=[ 3195] 00:14:23.938 bw ( KiB/s): min=142592, max=192000, per=98.56%, avg=166656.00, stdev=15369.60, samples=9 00:14:23.938 iops : min=35648, max=48000, avg=41664.00, stdev=3842.40, samples=9 00:14:23.938 lat (usec) : 750=0.07%, 1000=6.98% 00:14:23.938 lat (msec) : 2=88.37%, 4=4.58% 00:14:23.938 cpu : usr=39.38%, sys=59.84%, ctx=13, majf=0, minf=771 00:14:23.938 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:23.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.938 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 
64=1.5%, >=64=0.0% 00:14:23.938 issued rwts: total=211396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.938 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:23.938 00:14:23.938 Run status group 0 (all jobs): 00:14:23.938 READ: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=826MiB (866MB), run=5001-5001msec 00:14:23.938 ----------------------------------------------------- 00:14:23.938 Suppressions used: 00:14:23.938 count bytes template 00:14:23.938 1 11 /usr/src/fio/parse.c 00:14:23.938 1 8 libtcmalloc_minimal.so 00:14:23.938 1 904 libcrypto.so 00:14:23.938 ----------------------------------------------------- 00:14:23.938 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:23.938 18:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:23.938 { 00:14:23.938 "subsystems": [ 00:14:23.938 { 00:14:23.938 "subsystem": "bdev", 00:14:23.938 "config": [ 00:14:23.938 { 00:14:23.938 "params": { 00:14:23.938 "io_mechanism": "io_uring_cmd", 00:14:23.938 "conserve_cpu": false, 00:14:23.938 "filename": "/dev/ng0n1", 00:14:23.938 "name": "xnvme_bdev" 00:14:23.938 }, 00:14:23.938 "method": "bdev_xnvme_create" 00:14:23.938 }, 00:14:23.938 { 00:14:23.938 "method": "bdev_wait_for_examine" 00:14:23.938 } 00:14:23.938 ] 00:14:23.938 } 00:14:23.938 ] 00:14:23.938 } 00:14:23.938 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:23.938 fio-3.35 00:14:23.938 Starting 1 thread 00:14:29.233 00:14:29.233 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82853: Fri Nov 29 18:27:48 2024 00:14:29.233 write: IOPS=40.3k, BW=157MiB/s (165MB/s)(788MiB/5005msec); 0 zone resets 00:14:29.233 slat (nsec): min=2900, max=55390, avg=3525.02, stdev=1447.99 00:14:29.233 clat (usec): min=146, max=6183, avg=1453.31, stdev=270.16 00:14:29.233 lat (usec): min=150, max=6192, avg=1456.83, stdev=270.41 00:14:29.233 clat percentiles (usec): 00:14:29.233 | 1.00th=[ 996], 5.00th=[ 1139], 10.00th=[ 1188], 20.00th=[ 1254], 00:14:29.233 | 30.00th=[ 1303], 40.00th=[ 1352], 50.00th=[ 1401], 60.00th=[ 1467], 00:14:29.233 | 70.00th=[ 1532], 80.00th=[ 1614], 90.00th=[ 1745], 95.00th=[ 1909], 00:14:29.233 | 99.00th=[ 2343], 99.50th=[ 2573], 99.90th=[ 3163], 99.95th=[ 3752], 00:14:29.233 | 99.99th=[ 5997] 00:14:29.233 bw ( KiB/s): min=140407, max=168600, per=99.90%, avg=161104.78, stdev=8847.00, samples=9 00:14:29.233 iops : min=35101, max=42150, avg=40276.11, stdev=2211.97, samples=9 00:14:29.233 lat (usec) : 250=0.01%, 500=0.01%, 750=0.07%, 1000=0.97% 00:14:29.233 lat (msec) : 2=95.44%, 4=3.46%, 10=0.04% 00:14:29.233 cpu : usr=40.29%, sys=58.73%, ctx=13, majf=0, minf=772 00:14:29.233 IO depths : 1=1.4%, 2=2.9%, 4=5.9%, 8=11.9%, 16=24.0%, 32=52.3%, >=64=1.7% 00:14:29.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.233 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:29.233 issued rwts: total=0,201779,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:29.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:29.233 00:14:29.233 Run status group 0 (all jobs): 00:14:29.233 WRITE: bw=157MiB/s (165MB/s), 157MiB/s-157MiB/s (165MB/s-165MB/s), io=788MiB (826MB), run=5005-5005msec 00:14:29.233 ----------------------------------------------------- 00:14:29.233 Suppressions used: 00:14:29.233 count bytes template 00:14:29.233 1 11 /usr/src/fio/parse.c 00:14:29.233 1 8 libtcmalloc_minimal.so 00:14:29.233 1 904 libcrypto.so 00:14:29.233 ----------------------------------------------------- 00:14:29.233 00:14:29.233 ************************************ 00:14:29.233 END TEST xnvme_fio_plugin 00:14:29.233 ************************************ 00:14:29.233 00:14:29.233 real 0m11.954s 00:14:29.233 user 0m5.081s 00:14:29.233 sys 0m6.471s 00:14:29.233 18:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.233 18:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:29.233 18:27:48 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:29.233 18:27:48 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:29.233 18:27:48 nvme_xnvme -- 
xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:29.233 18:27:48 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:29.233 18:27:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:29.233 18:27:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.233 18:27:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:29.233 ************************************ 00:14:29.233 START TEST xnvme_rpc 00:14:29.233 ************************************ 00:14:29.233 18:27:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:29.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.233 18:27:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:29.233 18:27:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:29.233 18:27:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:29.233 18:27:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:29.233 18:27:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82927 00:14:29.233 18:27:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82927 00:14:29.233 18:27:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82927 ']' 00:14:29.233 18:27:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.233 18:27:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.233 18:27:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.233 18:27:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.233 18:27:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.233 18:27:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:29.233 [2024-11-29 18:27:49.085720] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
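The xnvme_rpc test below drives a bare spdk_tgt over its default UNIX socket; the same sequence can be issued by hand (repo-relative paths are an assumption; the RPC names, arguments, and jq filter are taken from the trace):

# Start the target, create an xnvme bdev with conserve_cpu (-c), verify, delete.
./build/bin/spdk_tgt &
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # prints: true
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev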
00:14:29.233 [2024-11-29 18:27:49.086058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82927 ] 00:14:29.494 [2024-11-29 18:27:49.249185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.494 [2024-11-29 18:27:49.277533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.067 18:27:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.067 18:27:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:30.067 18:27:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:30.067 18:27:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.067 18:27:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.067 xnvme_bdev 00:14:30.067 18:27:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.067 18:27:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:30.067 18:27:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:30.067 18:27:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:30.067 18:27:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.067 18:27:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.329 18:27:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.329 18:27:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:30.329 18:27:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:30.329 18:27:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:30.329 18:27:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:30.329 18:27:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.329 18:27:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82927 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82927 ']' 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82927 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82927 00:14:30.329 killing process with pid 82927 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82927' 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82927 00:14:30.329 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82927 00:14:30.591 ************************************ 00:14:30.591 END TEST xnvme_rpc 00:14:30.591 ************************************ 00:14:30.591 00:14:30.591 real 0m1.428s 00:14:30.591 user 0m1.510s 00:14:30.591 sys 0m0.416s 00:14:30.591 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.591 18:27:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.591 18:27:50 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:30.591 18:27:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:30.591 18:27:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.591 18:27:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:30.591 ************************************ 00:14:30.591 START TEST xnvme_bdevperf 00:14:30.591 ************************************ 00:14:30.591 18:27:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:30.591 18:27:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:30.591 18:27:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:30.591 18:27:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:30.591 18:27:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:30.591 18:27:50 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:30.591 18:27:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:30.591 18:27:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:30.854 { 00:14:30.854 "subsystems": [ 00:14:30.854 { 00:14:30.854 "subsystem": "bdev", 00:14:30.854 "config": [ 00:14:30.854 { 00:14:30.854 "params": { 00:14:30.854 "io_mechanism": "io_uring_cmd", 00:14:30.854 "conserve_cpu": true, 00:14:30.854 "filename": "/dev/ng0n1", 00:14:30.854 "name": "xnvme_bdev" 00:14:30.854 }, 00:14:30.854 "method": "bdev_xnvme_create" 00:14:30.854 }, 00:14:30.854 { 00:14:30.854 "method": "bdev_wait_for_examine" 00:14:30.854 } 00:14:30.854 ] 00:14:30.854 } 00:14:30.854 ] 00:14:30.854 } 00:14:30.854 [2024-11-29 18:27:50.564446] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:14:30.854 [2024-11-29 18:27:50.564614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82985 ] 00:14:30.854 [2024-11-29 18:27:50.725963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.854 [2024-11-29 18:27:50.758021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.116 Running I/O for 5 seconds... 00:14:33.000 37824.00 IOPS, 147.75 MiB/s [2024-11-29T18:27:53.914Z] 38242.00 IOPS, 149.38 MiB/s [2024-11-29T18:27:55.302Z] 38460.33 IOPS, 150.24 MiB/s [2024-11-29T18:27:55.873Z] 38509.25 IOPS, 150.43 MiB/s 00:14:35.968 Latency(us) 00:14:35.968 [2024-11-29T18:27:55.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.968 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:35.968 xnvme_bdev : 5.00 38515.22 150.45 0.00 0.00 1658.19 875.91 5444.53 00:14:35.968 [2024-11-29T18:27:55.873Z] =================================================================================================================== 00:14:35.968 [2024-11-29T18:27:55.873Z] Total : 38515.22 150.45 0.00 0.00 1658.19 875.91 5444.53 00:14:36.230 18:27:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:36.230 18:27:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:36.230 18:27:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:36.230 18:27:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:36.230 18:27:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:36.230 { 00:14:36.230 "subsystems": [ 00:14:36.230 { 00:14:36.230 "subsystem": "bdev", 00:14:36.230 "config": [ 00:14:36.230 { 00:14:36.230 "params": { 00:14:36.230 "io_mechanism": "io_uring_cmd", 00:14:36.230 "conserve_cpu": true, 00:14:36.230 "filename": "/dev/ng0n1", 00:14:36.230 "name": "xnvme_bdev" 00:14:36.230 }, 00:14:36.230 "method": "bdev_xnvme_create" 00:14:36.230 }, 00:14:36.230 { 00:14:36.230 "method": "bdev_wait_for_examine" 00:14:36.230 } 00:14:36.230 ] 00:14:36.230 } 00:14:36.230 ] 00:14:36.230 } 00:14:36.230 [2024-11-29 18:27:56.124990] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:14:36.230 [2024-11-29 18:27:56.125399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83048 ] 00:14:36.492 [2024-11-29 18:27:56.295980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.492 [2024-11-29 18:27:56.325994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.754 Running I/O for 5 seconds... 00:14:38.641 40568.00 IOPS, 158.47 MiB/s [2024-11-29T18:27:59.489Z] 40337.00 IOPS, 157.57 MiB/s [2024-11-29T18:28:00.879Z] 38653.67 IOPS, 150.99 MiB/s [2024-11-29T18:28:01.450Z] 37838.25 IOPS, 147.81 MiB/s 00:14:41.545 Latency(us) 00:14:41.545 [2024-11-29T18:28:01.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.545 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:41.545 xnvme_bdev : 5.00 37289.87 145.66 0.00 0.00 1712.25 759.34 6301.54 00:14:41.545 [2024-11-29T18:28:01.450Z] =================================================================================================================== 00:14:41.545 [2024-11-29T18:28:01.450Z] Total : 37289.87 145.66 0.00 0.00 1712.25 759.34 6301.54 00:14:41.804 18:28:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:41.804 18:28:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:41.804 18:28:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:41.804 18:28:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:41.804 18:28:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:41.804 { 00:14:41.804 "subsystems": [ 00:14:41.804 { 00:14:41.804 "subsystem": "bdev", 00:14:41.804 "config": [ 00:14:41.804 { 00:14:41.804 "params": { 00:14:41.804 "io_mechanism": "io_uring_cmd", 00:14:41.804 "conserve_cpu": true, 00:14:41.804 "filename": "/dev/ng0n1", 00:14:41.804 "name": "xnvme_bdev" 00:14:41.804 }, 00:14:41.804 "method": "bdev_xnvme_create" 00:14:41.804 }, 00:14:41.804 { 00:14:41.804 "method": "bdev_wait_for_examine" 00:14:41.804 } 00:14:41.804 ] 00:14:41.804 } 00:14:41.804 ] 00:14:41.804 } 00:14:41.804 [2024-11-29 18:28:01.706150] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:14:41.804 [2024-11-29 18:28:01.706291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83117 ] 00:14:42.065 [2024-11-29 18:28:01.866079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.065 [2024-11-29 18:28:01.894995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.326 Running I/O for 5 seconds... 
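Each bdevperf invocation in this suite differs only in -w; the four runs come from one loop over the workload list. A condensed sketch, with the list inferred from the -w values seen in the log and the hypothetical xnvme.json from the earlier sketch standing in for the piped config:

# One run per workload, everything else held constant.
io_uring_cmd=(randread randwrite unmap write_zeroes)
for io_pattern in "${io_uring_cmd[@]}"; do
  ./build/examples/bdevperf --json ./xnvme.json -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
done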
00:14:44.212 76288.00 IOPS, 298.00 MiB/s [2024-11-29T18:28:05.060Z] 78144.00 IOPS, 305.25 MiB/s [2024-11-29T18:28:06.442Z] 78805.33 IOPS, 307.83 MiB/s [2024-11-29T18:28:07.014Z] 79136.00 IOPS, 309.12 MiB/s [2024-11-29T18:28:07.014Z] 80870.40 IOPS, 315.90 MiB/s 00:14:47.109 Latency(us) 00:14:47.109 [2024-11-29T18:28:07.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.109 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:47.109 xnvme_bdev : 5.00 80853.34 315.83 0.00 0.00 788.13 390.70 2697.06 00:14:47.109 [2024-11-29T18:28:07.014Z] =================================================================================================================== 00:14:47.109 [2024-11-29T18:28:07.014Z] Total : 80853.34 315.83 0.00 0.00 788.13 390.70 2697.06 00:14:47.369 18:28:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:47.369 18:28:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:47.369 18:28:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:47.369 18:28:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:47.369 18:28:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:47.369 { 00:14:47.369 "subsystems": [ 00:14:47.369 { 00:14:47.369 "subsystem": "bdev", 00:14:47.369 "config": [ 00:14:47.369 { 00:14:47.369 "params": { 00:14:47.369 "io_mechanism": "io_uring_cmd", 00:14:47.369 "conserve_cpu": true, 00:14:47.369 "filename": "/dev/ng0n1", 00:14:47.369 "name": "xnvme_bdev" 00:14:47.369 }, 00:14:47.369 "method": "bdev_xnvme_create" 00:14:47.369 }, 00:14:47.369 { 00:14:47.369 "method": "bdev_wait_for_examine" 00:14:47.369 } 00:14:47.369 ] 00:14:47.369 } 00:14:47.369 ] 00:14:47.369 } 00:14:47.369 [2024-11-29 18:28:07.248251] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:14:47.370 [2024-11-29 18:28:07.248377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83180 ] 00:14:47.630 [2024-11-29 18:28:07.407350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.630 [2024-11-29 18:28:07.430303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.630 Running I/O for 5 seconds... 
00:14:49.959 43686.00 IOPS, 170.65 MiB/s [2024-11-29T18:28:10.806Z] 42811.00 IOPS, 167.23 MiB/s [2024-11-29T18:28:11.751Z] 44662.33 IOPS, 174.46 MiB/s [2024-11-29T18:28:12.695Z] 43716.50 IOPS, 170.77 MiB/s [2024-11-29T18:28:12.695Z] 42857.00 IOPS, 167.41 MiB/s 00:14:52.790 Latency(us) 00:14:52.790 [2024-11-29T18:28:12.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.790 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:52.790 xnvme_bdev : 5.00 42828.13 167.30 0.00 0.00 1488.83 73.26 19156.68 00:14:52.790 [2024-11-29T18:28:12.695Z] =================================================================================================================== 00:14:52.790 [2024-11-29T18:28:12.695Z] Total : 42828.13 167.30 0.00 0.00 1488.83 73.26 19156.68 00:14:53.052 00:14:53.052 real 0m22.309s 00:14:53.052 user 0m15.202s 00:14:53.052 sys 0m5.025s 00:14:53.052 18:28:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.052 18:28:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:53.052 ************************************ 00:14:53.052 END TEST xnvme_bdevperf 00:14:53.052 ************************************ 00:14:53.052 18:28:12 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:53.052 18:28:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:53.052 18:28:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.052 18:28:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:53.052 ************************************ 00:14:53.052 START TEST xnvme_fio_plugin 00:14:53.052 ************************************ 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:53.052 18:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:53.052 { 00:14:53.052 "subsystems": [ 00:14:53.052 { 00:14:53.052 "subsystem": "bdev", 00:14:53.052 "config": [ 00:14:53.052 { 00:14:53.052 "params": { 00:14:53.052 "io_mechanism": "io_uring_cmd", 00:14:53.052 "conserve_cpu": true, 00:14:53.052 "filename": "/dev/ng0n1", 00:14:53.052 "name": "xnvme_bdev" 00:14:53.052 }, 00:14:53.052 "method": "bdev_xnvme_create" 00:14:53.052 }, 00:14:53.052 { 00:14:53.052 "method": "bdev_wait_for_examine" 00:14:53.052 } 00:14:53.052 ] 00:14:53.052 } 00:14:53.052 ] 00:14:53.052 } 00:14:53.313 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:53.313 fio-3.35 00:14:53.313 Starting 1 thread 00:14:59.915 00:14:59.915 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83288: Fri Nov 29 18:28:18 2024 00:14:59.915 read: IOPS=42.1k, BW=164MiB/s (172MB/s)(823MiB/5002msec) 00:14:59.915 slat (usec): min=2, max=102, avg= 3.09, stdev= 1.10 00:14:59.915 clat (usec): min=816, max=3787, avg=1398.73, stdev=286.78 00:14:59.915 lat (usec): min=819, max=3825, avg=1401.82, stdev=286.87 00:14:59.915 clat percentiles (usec): 00:14:59.915 | 1.00th=[ 963], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1172], 00:14:59.915 | 30.00th=[ 1221], 40.00th=[ 1270], 50.00th=[ 1319], 60.00th=[ 1401], 00:14:59.915 | 70.00th=[ 1500], 80.00th=[ 1631], 90.00th=[ 1811], 95.00th=[ 1958], 00:14:59.915 | 99.00th=[ 2212], 99.50th=[ 2343], 99.90th=[ 2638], 99.95th=[ 2737], 00:14:59.915 | 99.99th=[ 3556] 00:14:59.915 bw ( KiB/s): min=162816, max=176128, per=99.98%, avg=168391.11, stdev=4853.51, samples=9 00:14:59.915 iops : min=40704, max=44032, avg=42097.78, stdev=1213.38, samples=9 00:14:59.915 lat (usec) : 1000=2.15% 00:14:59.915 lat (msec) : 2=93.88%, 4=3.97% 00:14:59.915 cpu : usr=85.18%, sys=12.46%, ctx=9, majf=0, minf=771 00:14:59.915 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:59.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.915 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 
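Outside a sanitizer build, the LD_PRELOAD construction traced above reduces to preloading just the fio plugin; a hand-run equivalent of the traced invocation (./xnvme.json again standing in for the config on fd 62, and paths assumed repo-relative):

# fio with SPDK's external bdev engine, loaded via LD_PRELOAD.
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=./xnvme.json --filename=xnvme_bdev \
  --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
  --time_based --runtime=5 --thread=1 --name=xnvme_bdev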
00:14:59.915 issued rwts: total=210624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.915 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:59.915 00:14:59.915 Run status group 0 (all jobs): 00:14:59.915 READ: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=823MiB (863MB), run=5002-5002msec 00:14:59.915 ----------------------------------------------------- 00:14:59.915 Suppressions used: 00:14:59.915 count bytes template 00:14:59.915 1 11 /usr/src/fio/parse.c 00:14:59.915 1 8 libtcmalloc_minimal.so 00:14:59.915 1 904 libcrypto.so 00:14:59.915 ----------------------------------------------------- 00:14:59.915 00:14:59.915 18:28:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:59.915 18:28:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.915 18:28:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.915 18:28:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:59.915 18:28:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:59.915 18:28:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:59.915 18:28:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:59.915 18:28:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:59.915 18:28:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:59.915 18:28:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.916 18:28:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:59.916 18:28:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:59.916 18:28:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:59.916 18:28:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.916 18:28:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:59.916 18:28:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:59.916 18:28:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:59.916 18:28:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:59.916 18:28:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:59.916 18:28:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:59.916 18:28:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.916 { 00:14:59.916 "subsystems": [ 00:14:59.916 { 00:14:59.916 "subsystem": "bdev", 00:14:59.916 "config": [ 00:14:59.916 { 00:14:59.916 "params": { 00:14:59.916 "io_mechanism": "io_uring_cmd", 00:14:59.916 "conserve_cpu": true, 00:14:59.916 "filename": "/dev/ng0n1", 00:14:59.916 "name": "xnvme_bdev" 00:14:59.916 }, 00:14:59.916 "method": "bdev_xnvme_create" 00:14:59.916 }, 00:14:59.916 { 00:14:59.916 "method": "bdev_wait_for_examine" 00:14:59.916 } 00:14:59.916 ] 00:14:59.916 } 00:14:59.916 ] 00:14:59.916 } 00:14:59.916 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:59.916 fio-3.35 00:14:59.916 Starting 1 thread 00:15:05.216 00:15:05.216 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83372: Fri Nov 29 18:28:24 2024 00:15:05.216 write: IOPS=40.0k, BW=156MiB/s (164MB/s)(782MiB/5001msec); 0 zone resets 00:15:05.216 slat (usec): min=2, max=129, avg= 3.90, stdev= 2.09 00:15:05.216 clat (usec): min=363, max=5404, avg=1443.22, stdev=280.88 00:15:05.216 lat (usec): min=366, max=5412, avg=1447.12, stdev=281.45 00:15:05.216 clat percentiles (usec): 00:15:05.216 | 1.00th=[ 1029], 5.00th=[ 1090], 10.00th=[ 1139], 20.00th=[ 1205], 00:15:05.216 | 30.00th=[ 1270], 40.00th=[ 1319], 50.00th=[ 1401], 60.00th=[ 1467], 00:15:05.216 | 70.00th=[ 1549], 80.00th=[ 1647], 90.00th=[ 1811], 95.00th=[ 1942], 00:15:05.216 | 99.00th=[ 2278], 99.50th=[ 2474], 99.90th=[ 3032], 99.95th=[ 3687], 00:15:05.216 | 99.99th=[ 4359] 00:15:05.216 bw ( KiB/s): min=145968, max=178040, per=99.14%, avg=158784.89, stdev=10476.76, samples=9 00:15:05.216 iops : min=36492, max=44510, avg=39696.22, stdev=2619.19, samples=9 00:15:05.216 lat (usec) : 500=0.01%, 1000=0.42% 00:15:05.216 lat (msec) : 2=95.83%, 4=3.72%, 10=0.03% 00:15:05.216 cpu : usr=59.42%, sys=35.72%, ctx=8, majf=0, minf=772 00:15:05.216 IO depths : 1=1.5%, 2=3.0%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.2%, >=64=1.6% 00:15:05.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.216 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:05.216 issued rwts: total=0,200245,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.216 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:05.216 00:15:05.216 Run status group 0 (all jobs): 00:15:05.216 WRITE: bw=156MiB/s (164MB/s), 156MiB/s-156MiB/s (164MB/s-164MB/s), io=782MiB (820MB), run=5001-5001msec 00:15:05.216 ----------------------------------------------------- 00:15:05.216 Suppressions used: 00:15:05.216 count bytes template 00:15:05.216 1 11 /usr/src/fio/parse.c 00:15:05.216 1 8 libtcmalloc_minimal.so 00:15:05.216 1 904 libcrypto.so 00:15:05.216 ----------------------------------------------------- 00:15:05.216 00:15:05.216 ************************************ 00:15:05.216 END TEST xnvme_fio_plugin 00:15:05.216 ************************************ 00:15:05.216 00:15:05.216 real 0m12.213s 00:15:05.216 user 0m8.457s 00:15:05.216 sys 0m3.082s 00:15:05.216 18:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:05.216 18:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:05.476 Process with pid 82927 is not found 00:15:05.476 18:28:25 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 82927 00:15:05.476 18:28:25 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 82927 ']' 00:15:05.476 18:28:25 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 
82927 00:15:05.476 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (82927) - No such process 00:15:05.476 18:28:25 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 82927 is not found' 00:15:05.476 ************************************ 00:15:05.476 END TEST nvme_xnvme 00:15:05.476 ************************************ 00:15:05.476 00:15:05.476 real 2m58.816s 00:15:05.476 user 1m32.496s 00:15:05.476 sys 1m12.550s 00:15:05.476 18:28:25 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:05.476 18:28:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:05.476 18:28:25 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:05.476 18:28:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:05.476 18:28:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:05.476 18:28:25 -- common/autotest_common.sh@10 -- # set +x 00:15:05.476 ************************************ 00:15:05.476 START TEST blockdev_xnvme 00:15:05.476 ************************************ 00:15:05.476 18:28:25 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:05.476 * Looking for test storage... 00:15:05.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:05.476 18:28:25 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:05.476 18:28:25 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:05.476 18:28:25 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:05.476 18:28:25 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:05.476 18:28:25 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:05.476 18:28:25 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:05.476 18:28:25 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:05.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.476 --rc genhtml_branch_coverage=1 00:15:05.476 --rc genhtml_function_coverage=1 00:15:05.476 --rc genhtml_legend=1 00:15:05.476 --rc geninfo_all_blocks=1 00:15:05.476 --rc geninfo_unexecuted_blocks=1 00:15:05.476 00:15:05.476 ' 00:15:05.476 18:28:25 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:05.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.476 --rc genhtml_branch_coverage=1 00:15:05.476 --rc genhtml_function_coverage=1 00:15:05.476 --rc genhtml_legend=1 00:15:05.476 --rc geninfo_all_blocks=1 00:15:05.476 --rc geninfo_unexecuted_blocks=1 00:15:05.476 00:15:05.476 ' 00:15:05.476 18:28:25 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:05.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.476 --rc genhtml_branch_coverage=1 00:15:05.476 --rc genhtml_function_coverage=1 00:15:05.476 --rc genhtml_legend=1 00:15:05.476 --rc geninfo_all_blocks=1 00:15:05.476 --rc geninfo_unexecuted_blocks=1 00:15:05.476 00:15:05.476 ' 00:15:05.476 18:28:25 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:05.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.476 --rc genhtml_branch_coverage=1 00:15:05.476 --rc genhtml_function_coverage=1 00:15:05.476 --rc genhtml_legend=1 00:15:05.476 --rc geninfo_all_blocks=1 00:15:05.476 --rc geninfo_unexecuted_blocks=1 00:15:05.476 00:15:05.476 ' 00:15:05.476 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:05.476 18:28:25 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:05.476 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:05.476 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:05.476 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:05.476 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:05.476 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:05.476 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:05.476 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:05.476 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:05.476 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:05.738 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:05.738 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:05.738 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:05.738 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:05.738 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:05.738 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:05.738 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:05.738 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:05.738 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:05.738 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:05.738 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:05.738 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:05.738 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:05.738 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=83502 00:15:05.738 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:05.738 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 83502 00:15:05.738 18:28:25 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 83502 ']' 00:15:05.738 18:28:25 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.738 18:28:25 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:05.738 18:28:25 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.738 18:28:25 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:05.738 18:28:25 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:05.738 18:28:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:05.738 [2024-11-29 18:28:25.476053] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:15:05.738 [2024-11-29 18:28:25.476995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83502 ] 00:15:06.000 [2024-11-29 18:28:25.646786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.000 [2024-11-29 18:28:25.688124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.573 18:28:26 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.573 18:28:26 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:06.573 18:28:26 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:06.573 18:28:26 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:06.573 18:28:26 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:06.573 18:28:26 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:06.573 18:28:26 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:07.146 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:07.747 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:07.747 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:07.747 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:07.747 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:07.747 nvme0n1 00:15:07.747 nvme0n2 00:15:07.747 nvme0n3 00:15:07.747 nvme1n1 00:15:07.747 nvme2n1 00:15:07.747 nvme3n1 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.747 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.747 18:28:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.748 18:28:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.748 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:07.748 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:07.748 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:07.748 18:28:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.748 18:28:27 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:15:08.010 18:28:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.010 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:08.010 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:08.010 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "0658035f-8b04-41c1-a691-de65c5a25688"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0658035f-8b04-41c1-a691-de65c5a25688",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "5fec22a1-faa5-4ac3-be9b-cb2dcaf0e48c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5fec22a1-faa5-4ac3-be9b-cb2dcaf0e48c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "67f604d4-5a00-4878-9445-986bdb225882"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "67f604d4-5a00-4878-9445-986bdb225882",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2c0fd819-d4f7-4b82-b2be-c628337fc08e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2c0fd819-d4f7-4b82-b2be-c628337fc08e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "d51bc708-865c-454a-b6a6-a1bd9489ba1f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d51bc708-865c-454a-b6a6-a1bd9489ba1f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "1d572bba-bec0-4971-983f-dffb87a55cb3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1d572bba-bec0-4971-983f-dffb87a55cb3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:08.010 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:08.010 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:08.010 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:08.010 18:28:27 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 83502 00:15:08.010 18:28:27 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 83502 ']' 00:15:08.010 18:28:27 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 83502 00:15:08.010 18:28:27 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:08.010 18:28:27 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.010 18:28:27 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83502 00:15:08.010 killing process with pid 83502 00:15:08.010 18:28:27 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:08.010 18:28:27 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:08.010 18:28:27 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83502' 00:15:08.011 18:28:27 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 83502 00:15:08.011 
18:28:27 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 83502 00:15:08.585 18:28:28 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:08.585 18:28:28 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:08.585 18:28:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:08.585 18:28:28 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.585 18:28:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:08.585 ************************************ 00:15:08.585 START TEST bdev_hello_world 00:15:08.585 ************************************ 00:15:08.585 18:28:28 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:08.585 [2024-11-29 18:28:28.321748] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:15:08.585 [2024-11-29 18:28:28.321907] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83770 ] 00:15:08.585 [2024-11-29 18:28:28.488479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.847 [2024-11-29 18:28:28.529684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.109 [2024-11-29 18:28:28.794877] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:09.109 [2024-11-29 18:28:28.794950] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:09.109 [2024-11-29 18:28:28.794973] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:09.109 [2024-11-29 18:28:28.797443] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:09.109 [2024-11-29 18:28:28.798202] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:09.109 [2024-11-29 18:28:28.798245] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:09.109 [2024-11-29 18:28:28.798693] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
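For reference, the hello-world write/read cycle traced above reduces to a single invocation of the hello_bdev example against the generated xnvme bdev config; the paths are exactly as they appear in this run (the harness additionally passes a trailing empty argument):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1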
00:15:09.109 00:15:09.109 [2024-11-29 18:28:28.798730] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:09.370 00:15:09.370 real 0m0.811s 00:15:09.370 user 0m0.406s 00:15:09.370 sys 0m0.258s 00:15:09.370 ************************************ 00:15:09.370 END TEST bdev_hello_world 00:15:09.370 ************************************ 00:15:09.370 18:28:29 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.370 18:28:29 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:09.370 18:28:29 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:09.370 18:28:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:09.371 18:28:29 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.371 18:28:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:09.371 ************************************ 00:15:09.371 START TEST bdev_bounds 00:15:09.371 ************************************ 00:15:09.371 18:28:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:09.371 Process bdevio pid: 83800 00:15:09.371 18:28:29 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=83800 00:15:09.371 18:28:29 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:09.371 18:28:29 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 83800' 00:15:09.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.371 18:28:29 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 83800 00:15:09.371 18:28:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 83800 ']' 00:15:09.371 18:28:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.371 18:28:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.371 18:28:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.371 18:28:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.371 18:28:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:09.371 18:28:29 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:09.371 [2024-11-29 18:28:29.206409] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
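waitforlisten blocks until the freshly launched bdevio process answers on its RPC socket. A minimal version of that polling loop might look like the sketch below; the real helper in autotest_common.sh handles retries and cleanup more carefully, and the probe shown (rpc_get_methods over scripts/rpc.py) is simply one standard RPC that any live SPDK target answers:

    # Poll the RPC socket until the target responds or the attempt budget runs out.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            # A process that died during startup will never start listening.
            kill -0 "$pid" 2>/dev/null || return 1
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }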
00:15:09.371 [2024-11-29 18:28:29.206575] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83800 ] 00:15:09.632 [2024-11-29 18:28:29.371360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:09.632 [2024-11-29 18:28:29.415749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.632 [2024-11-29 18:28:29.416270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.632 [2024-11-29 18:28:29.416298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.205 18:28:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.205 18:28:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:10.205 18:28:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:10.467 I/O targets: 00:15:10.467 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:10.467 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:10.467 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:10.467 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:10.467 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:10.467 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:10.467 00:15:10.467 00:15:10.467 CUnit - A unit testing framework for C - Version 2.1-3 00:15:10.467 http://cunit.sourceforge.net/ 00:15:10.467 00:15:10.467 00:15:10.467 Suite: bdevio tests on: nvme3n1 00:15:10.467 Test: blockdev write read block ...passed 00:15:10.467 Test: blockdev write zeroes read block ...passed 00:15:10.467 Test: blockdev write zeroes read no split ...passed 00:15:10.467 Test: blockdev write zeroes read split ...passed 00:15:10.467 Test: blockdev write zeroes read split partial ...passed 00:15:10.467 Test: blockdev reset ...passed 00:15:10.467 Test: blockdev write read 8 blocks ...passed 00:15:10.467 Test: blockdev write read size > 128k ...passed 00:15:10.467 Test: blockdev write read invalid size ...passed 00:15:10.467 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:10.467 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:10.467 Test: blockdev write read max offset ...passed 00:15:10.467 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:10.467 Test: blockdev writev readv 8 blocks ...passed 00:15:10.467 Test: blockdev writev readv 30 x 1block ...passed 00:15:10.467 Test: blockdev writev readv block ...passed 00:15:10.467 Test: blockdev writev readv size > 128k ...passed 00:15:10.467 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:10.467 Test: blockdev comparev and writev ...passed 00:15:10.467 Test: blockdev nvme passthru rw ...passed 00:15:10.467 Test: blockdev nvme passthru vendor specific ...passed 00:15:10.467 Test: blockdev nvme admin passthru ...passed 00:15:10.467 Test: blockdev copy ...passed 00:15:10.467 Suite: bdevio tests on: nvme2n1 00:15:10.467 Test: blockdev write read block ...passed 00:15:10.467 Test: blockdev write zeroes read block ...passed 00:15:10.467 Test: blockdev write zeroes read no split ...passed 00:15:10.467 Test: blockdev write zeroes read split ...passed 00:15:10.467 Test: blockdev write zeroes read split partial ...passed 00:15:10.467 Test: blockdev reset ...passed 
00:15:10.467 Test: blockdev write read 8 blocks ...passed 00:15:10.467 Test: blockdev write read size > 128k ...passed 00:15:10.467 Test: blockdev write read invalid size ...passed 00:15:10.467 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:10.467 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:10.467 Test: blockdev write read max offset ...passed 00:15:10.467 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:10.467 Test: blockdev writev readv 8 blocks ...passed 00:15:10.467 Test: blockdev writev readv 30 x 1block ...passed 00:15:10.467 Test: blockdev writev readv block ...passed 00:15:10.467 Test: blockdev writev readv size > 128k ...passed 00:15:10.467 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:10.467 Test: blockdev comparev and writev ...passed 00:15:10.467 Test: blockdev nvme passthru rw ...passed 00:15:10.467 Test: blockdev nvme passthru vendor specific ...passed 00:15:10.467 Test: blockdev nvme admin passthru ...passed 00:15:10.467 Test: blockdev copy ...passed 00:15:10.467 Suite: bdevio tests on: nvme1n1 00:15:10.467 Test: blockdev write read block ...passed 00:15:10.467 Test: blockdev write zeroes read block ...passed 00:15:10.467 Test: blockdev write zeroes read no split ...passed 00:15:10.467 Test: blockdev write zeroes read split ...passed 00:15:10.467 Test: blockdev write zeroes read split partial ...passed 00:15:10.467 Test: blockdev reset ...passed 00:15:10.467 Test: blockdev write read 8 blocks ...passed 00:15:10.467 Test: blockdev write read size > 128k ...passed 00:15:10.467 Test: blockdev write read invalid size ...passed 00:15:10.467 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:10.467 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:10.467 Test: blockdev write read max offset ...passed 00:15:10.467 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:10.467 Test: blockdev writev readv 8 blocks ...passed 00:15:10.467 Test: blockdev writev readv 30 x 1block ...passed 00:15:10.467 Test: blockdev writev readv block ...passed 00:15:10.467 Test: blockdev writev readv size > 128k ...passed 00:15:10.467 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:10.467 Test: blockdev comparev and writev ...passed 00:15:10.467 Test: blockdev nvme passthru rw ...passed 00:15:10.467 Test: blockdev nvme passthru vendor specific ...passed 00:15:10.467 Test: blockdev nvme admin passthru ...passed 00:15:10.467 Test: blockdev copy ...passed 00:15:10.467 Suite: bdevio tests on: nvme0n3 00:15:10.467 Test: blockdev write read block ...passed 00:15:10.467 Test: blockdev write zeroes read block ...passed 00:15:10.467 Test: blockdev write zeroes read no split ...passed 00:15:10.467 Test: blockdev write zeroes read split ...passed 00:15:10.467 Test: blockdev write zeroes read split partial ...passed 00:15:10.467 Test: blockdev reset ...passed 00:15:10.467 Test: blockdev write read 8 blocks ...passed 00:15:10.467 Test: blockdev write read size > 128k ...passed 00:15:10.467 Test: blockdev write read invalid size ...passed 00:15:10.467 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:10.467 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:10.467 Test: blockdev write read max offset ...passed 00:15:10.467 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:10.467 Test: blockdev writev readv 8 blocks 
...passed 00:15:10.467 Test: blockdev writev readv 30 x 1block ...passed 00:15:10.467 Test: blockdev writev readv block ...passed 00:15:10.467 Test: blockdev writev readv size > 128k ...passed 00:15:10.467 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:10.467 Test: blockdev comparev and writev ...passed 00:15:10.467 Test: blockdev nvme passthru rw ...passed 00:15:10.467 Test: blockdev nvme passthru vendor specific ...passed 00:15:10.467 Test: blockdev nvme admin passthru ...passed 00:15:10.467 Test: blockdev copy ...passed 00:15:10.467 Suite: bdevio tests on: nvme0n2 00:15:10.467 Test: blockdev write read block ...passed 00:15:10.467 Test: blockdev write zeroes read block ...passed 00:15:10.467 Test: blockdev write zeroes read no split ...passed 00:15:10.467 Test: blockdev write zeroes read split ...passed 00:15:10.467 Test: blockdev write zeroes read split partial ...passed 00:15:10.467 Test: blockdev reset ...passed 00:15:10.467 Test: blockdev write read 8 blocks ...passed 00:15:10.467 Test: blockdev write read size > 128k ...passed 00:15:10.729 Test: blockdev write read invalid size ...passed 00:15:10.729 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:10.729 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:10.729 Test: blockdev write read max offset ...passed 00:15:10.729 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:10.729 Test: blockdev writev readv 8 blocks ...passed 00:15:10.729 Test: blockdev writev readv 30 x 1block ...passed 00:15:10.729 Test: blockdev writev readv block ...passed 00:15:10.729 Test: blockdev writev readv size > 128k ...passed 00:15:10.729 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:10.729 Test: blockdev comparev and writev ...passed 00:15:10.729 Test: blockdev nvme passthru rw ...passed 00:15:10.729 Test: blockdev nvme passthru vendor specific ...passed 00:15:10.729 Test: blockdev nvme admin passthru ...passed 00:15:10.729 Test: blockdev copy ...passed 00:15:10.729 Suite: bdevio tests on: nvme0n1 00:15:10.729 Test: blockdev write read block ...passed 00:15:10.729 Test: blockdev write zeroes read block ...passed 00:15:10.729 Test: blockdev write zeroes read no split ...passed 00:15:10.729 Test: blockdev write zeroes read split ...passed 00:15:10.729 Test: blockdev write zeroes read split partial ...passed 00:15:10.729 Test: blockdev reset ...passed 00:15:10.729 Test: blockdev write read 8 blocks ...passed 00:15:10.729 Test: blockdev write read size > 128k ...passed 00:15:10.729 Test: blockdev write read invalid size ...passed 00:15:10.729 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:10.729 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:10.729 Test: blockdev write read max offset ...passed 00:15:10.729 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:10.729 Test: blockdev writev readv 8 blocks ...passed 00:15:10.729 Test: blockdev writev readv 30 x 1block ...passed 00:15:10.729 Test: blockdev writev readv block ...passed 00:15:10.729 Test: blockdev writev readv size > 128k ...passed 00:15:10.729 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:10.729 Test: blockdev comparev and writev ...passed 00:15:10.729 Test: blockdev nvme passthru rw ...passed 00:15:10.729 Test: blockdev nvme passthru vendor specific ...passed 00:15:10.729 Test: blockdev nvme admin passthru ...passed 00:15:10.729 Test: blockdev copy ...passed 
00:15:10.729 00:15:10.729 Run Summary: Type Total Ran Passed Failed Inactive 00:15:10.729 suites 6 6 n/a 0 0 00:15:10.729 tests 138 138 138 0 0 00:15:10.729 asserts 780 780 780 0 n/a 00:15:10.729 00:15:10.729 Elapsed time = 0.634 seconds 00:15:10.729 0 00:15:10.729 18:28:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 83800 00:15:10.729 18:28:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 83800 ']' 00:15:10.729 18:28:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 83800 00:15:10.729 18:28:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:10.729 18:28:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.729 18:28:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83800 00:15:10.729 18:28:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:10.729 18:28:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:10.729 18:28:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83800' 00:15:10.729 killing process with pid 83800 00:15:10.729 18:28:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 83800 00:15:10.729 18:28:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 83800 00:15:10.992 18:28:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:10.992 00:15:10.992 real 0m1.637s 00:15:10.992 user 0m3.873s 00:15:10.992 sys 0m0.399s 00:15:10.992 18:28:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.992 ************************************ 00:15:10.992 18:28:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:10.992 END TEST bdev_bounds 00:15:10.992 ************************************ 00:15:10.992 18:28:30 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:10.992 18:28:30 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:10.992 18:28:30 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:10.992 18:28:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:10.992 ************************************ 00:15:10.992 START TEST bdev_nbd 00:15:10.992 ************************************ 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
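nbd_function_test, starting here, exports each bdev over the kernel NBD driver and exercises it end to end. Stripped of the harness plumbing, the RPC cycle it drives per device is essentially the following (nbd_start_disk, nbd_get_disks, and nbd_stop_disk are standard SPDK RPCs; the socket path is the one used in this run):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # Export a bdev as a kernel block device, list the mappings, then tear down.
    $rpc nbd_start_disk nvme0n1 /dev/nbd0
    $rpc nbd_get_disks | jq -r '.[] | .nbd_device'
    $rpc nbd_stop_disk /dev/nbd0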
00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=83850 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 83850 /var/tmp/spdk-nbd.sock 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 83850 ']' 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:10.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.992 18:28:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:11.254 [2024-11-29 18:28:30.921037] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
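Once each NBD device is started, the waitfornbd helper seen in the trace below confirms the node is actually usable: it waits for the name to appear in /proc/partitions, then performs one direct-I/O 4 KiB read and checks that the copied file is non-empty. Condensed (a sketch of the traced logic; the scratch path and retry delay here are placeholders, not the helper's exact values):

    waitfornbd_sketch() {
        local nbd_name=$1 i
        # Wait for the kernel to publish the partition entry for the new device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One direct 4 KiB read proves the device services I/O.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [[ $(stat -c %s /tmp/nbdtest) != 0 ]]
    }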
00:15:11.254 [2024-11-29 18:28:30.921716] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.254 [2024-11-29 18:28:31.088444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.254 [2024-11-29 18:28:31.126787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.198 18:28:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.198 18:28:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:12.198 18:28:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:12.198 18:28:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:12.199 18:28:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:12.199 18:28:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:12.199 18:28:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:12.199 18:28:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:12.199 18:28:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:12.199 18:28:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:12.199 18:28:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:12.199 18:28:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:12.199 18:28:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:12.199 18:28:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:12.199 18:28:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.199 
1+0 records in 00:15:12.199 1+0 records out 00:15:12.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00430424 s, 952 kB/s 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:12.199 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:12.460 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:12.460 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:12.460 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:12.460 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:12.460 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:12.460 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:12.460 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:12.460 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:12.460 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:12.460 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:12.460 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:12.460 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.460 1+0 records in 00:15:12.460 1+0 records out 00:15:12.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000722643 s, 5.7 MB/s 00:15:12.460 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.460 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:12.460 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.460 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:12.461 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:12.461 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:12.461 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:12.461 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:12.722 18:28:32 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.722 1+0 records in 00:15:12.722 1+0 records out 00:15:12.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000978483 s, 4.2 MB/s 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:12.722 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.984 1+0 records in 00:15:12.984 1+0 records out 00:15:12.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110768 s, 3.7 MB/s 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:12.984 18:28:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.246 1+0 records in 00:15:13.246 1+0 records out 00:15:13.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00163908 s, 2.5 MB/s 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:13.246 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:13.508 18:28:33 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.508 1+0 records in 00:15:13.508 1+0 records out 00:15:13.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108577 s, 3.8 MB/s 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:13.508 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:13.769 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:13.769 { 00:15:13.769 "nbd_device": "/dev/nbd0", 00:15:13.769 "bdev_name": "nvme0n1" 00:15:13.769 }, 00:15:13.770 { 00:15:13.770 "nbd_device": "/dev/nbd1", 00:15:13.770 "bdev_name": "nvme0n2" 00:15:13.770 }, 00:15:13.770 { 00:15:13.770 "nbd_device": "/dev/nbd2", 00:15:13.770 "bdev_name": "nvme0n3" 00:15:13.770 }, 00:15:13.770 { 00:15:13.770 "nbd_device": "/dev/nbd3", 00:15:13.770 "bdev_name": "nvme1n1" 00:15:13.770 }, 00:15:13.770 { 00:15:13.770 "nbd_device": "/dev/nbd4", 00:15:13.770 "bdev_name": "nvme2n1" 00:15:13.770 }, 00:15:13.770 { 00:15:13.770 "nbd_device": "/dev/nbd5", 00:15:13.770 "bdev_name": "nvme3n1" 00:15:13.770 } 00:15:13.770 ]' 00:15:13.770 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:13.770 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:13.770 { 00:15:13.770 "nbd_device": "/dev/nbd0", 00:15:13.770 "bdev_name": "nvme0n1" 00:15:13.770 }, 00:15:13.770 { 00:15:13.770 "nbd_device": "/dev/nbd1", 00:15:13.770 "bdev_name": "nvme0n2" 00:15:13.770 }, 00:15:13.770 { 00:15:13.770 "nbd_device": "/dev/nbd2", 00:15:13.770 "bdev_name": "nvme0n3" 00:15:13.770 }, 00:15:13.770 { 00:15:13.770 "nbd_device": "/dev/nbd3", 00:15:13.770 "bdev_name": "nvme1n1" 00:15:13.770 }, 00:15:13.770 { 00:15:13.770 "nbd_device": "/dev/nbd4", 00:15:13.770 "bdev_name": "nvme2n1" 00:15:13.770 }, 00:15:13.770 { 00:15:13.770 "nbd_device": "/dev/nbd5", 00:15:13.770 "bdev_name": "nvme3n1" 00:15:13.770 } 00:15:13.770 ]' 00:15:13.770 18:28:33 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:13.770 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:13.770 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:13.770 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:13.770 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:13.770 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:13.770 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.770 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:14.062 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:14.062 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:14.062 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:14.062 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.062 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.062 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:14.062 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:14.062 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.062 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.062 18:28:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:14.324 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:14.324 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:14.324 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:14.324 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.324 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.324 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:14.324 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:14.324 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.324 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.324 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:14.586 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:14.586 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:14.586 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:14.586 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.586 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.586 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:14.586 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:14.586 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.586 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.586 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.846 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:15.107 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:15.107 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:15.107 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:15.107 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:15.107 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:15.107 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:15.107 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:15.107 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:15.107 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:15.107 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:15.107 18:28:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
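The teardown traced above is nbd_common.sh walking all six exported devices: each nbd_stop_disk RPC is followed by a poll of /proc/partitions until the kernel drops the device node. A minimal sketch of the two helpers, reconstructed from the traced line numbers at bdev/nbd_common.sh@49-55 and @35-45 (the retry sleep is an assumption; the xtrace records only the checks that end the loop):

    nbd_stop_disks() {
        local rpc_server=$1
        local nbd_list=($2)
        local i
        for i in "${nbd_list[@]}"; do
            scripts/rpc.py -s "$rpc_server" nbd_stop_disk "$i"
            waitfornbd_exit "$(basename "$i")"
        done
    }

    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            # Finished once the kernel no longer lists the device.
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1   # assumed back-off, not visible in the trace
        done
        return 0
    }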
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:15.378 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:15.378 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:15.378 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:15.378 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:15.378 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:15.378 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:15.378 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:15.378 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:15.378 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:15.378 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:15.378 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:15.378 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:15.378 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:15.378 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:15.378 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:15.378 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:15.378 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:15.378 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:15.379 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:15.379 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:15.379 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:15.379 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:15.379 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:15.379 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:15.379 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:15.379 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:15.379 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:15.379 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:15.640 /dev/nbd0 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- 
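With all six devices torn down, the script then asserts that the SPDK app agrees. nbd_get_count (traced at bdev/nbd_common.sh@61-66) asks the app over RPC for its NBD table and counts the device paths; grep -c exits non-zero when it matches nothing, which is why the trace shows a bare "true" just before count=0. A sketch:

    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count
        nbd_disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c fails on zero matches; keep the assignment from aborting.
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }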
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.640 1+0 records in 00:15:15.640 1+0 records out 00:15:15.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000922288 s, 4.4 MB/s 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:15.640 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:15.900 /dev/nbd1 00:15:15.900 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:15.900 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:15.900 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:15.900 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:15.900 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:15.900 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:15.900 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:15.900 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:15.900 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:15.900 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:15.900 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.900 1+0 records in 00:15:15.900 1+0 records out 00:15:15.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000870692 s, 4.7 MB/s 00:15:15.900 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.900 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:15.900 18:28:35 
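On the start side, waitfornbd (traced at common/autotest_common.sh@872-893) is stricter than the exit check: after the device appears in /proc/partitions it proves the node is actually readable with a single 4 KiB O_DIRECT read, checking that the copied scratch file came out non-empty. A sketch, with the polling sleeps assumed:

    waitfornbd() {
        local nbd_name=$1
        local i size
        # Phase 1: wait for the kernel to register the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed
        done
        # Phase 2: one 4 KiB direct read must land in the scratch file.
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/"$nbd_name" of=test/bdev/nbdtest bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s test/bdev/nbdtest)
                rm -f test/bdev/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1   # assumed
        done
        return 1
    }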
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.900 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:15.900 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:15.900 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:15.900 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:15.900 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:16.162 /dev/nbd10 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.162 1+0 records in 00:15:16.162 1+0 records out 00:15:16.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129097 s, 3.2 MB/s 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:16.162 18:28:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:16.423 /dev/nbd11 00:15:16.423 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:16.423 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:16.423 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:16.423 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:16.423 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:16.423 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:16.423 18:28:36 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:16.423 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:16.423 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:16.423 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:16.423 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.423 1+0 records in 00:15:16.423 1+0 records out 00:15:16.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122202 s, 3.4 MB/s 00:15:16.423 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.423 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:16.423 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.423 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:16.423 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:16.423 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:16.423 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:16.423 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:16.685 /dev/nbd12 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.685 1+0 records in 00:15:16.685 1+0 records out 00:15:16.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00221998 s, 1.8 MB/s 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:16.685 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:16.945 /dev/nbd13 00:15:16.945 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:16.945 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:16.945 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:16.945 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:16.945 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:16.945 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:16.945 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:16.946 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:16.946 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:16.946 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:16.946 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.946 1+0 records in 00:15:16.946 1+0 records out 00:15:16.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000806647 s, 5.1 MB/s 00:15:16.946 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.946 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:16.946 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.946 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:16.946 18:28:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:16.946 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:16.946 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:16.946 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:16.946 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:16.946 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:17.205 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:17.205 { 00:15:17.205 "nbd_device": "/dev/nbd0", 00:15:17.205 "bdev_name": "nvme0n1" 00:15:17.205 }, 00:15:17.205 { 00:15:17.205 "nbd_device": "/dev/nbd1", 00:15:17.205 "bdev_name": "nvme0n2" 00:15:17.205 }, 00:15:17.205 { 00:15:17.205 "nbd_device": "/dev/nbd10", 00:15:17.205 "bdev_name": "nvme0n3" 00:15:17.205 }, 00:15:17.205 { 00:15:17.205 "nbd_device": "/dev/nbd11", 00:15:17.205 "bdev_name": "nvme1n1" 00:15:17.205 }, 00:15:17.205 { 00:15:17.205 "nbd_device": "/dev/nbd12", 00:15:17.205 "bdev_name": "nvme2n1" 00:15:17.205 }, 00:15:17.205 { 00:15:17.205 "nbd_device": "/dev/nbd13", 00:15:17.205 "bdev_name": "nvme3n1" 00:15:17.205 } 00:15:17.205 ]' 00:15:17.205 18:28:36 
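The export pass just traced pairs the six xNVMe bdevs with NBD nodes positionally. A sketch of nbd_start_disks as traced at bdev/nbd_common.sh@9-15 (the loop bound expands to 6 in the trace because both lists have six entries):

    nbd_start_disks() {
        local rpc_server=$1
        local bdev_list=($2)
        local nbd_list=($3)
        local i
        for ((i = 0; i < ${#nbd_list[@]}; i++)); do
            # Export bdev i on NBD node i, then confirm it is readable.
            scripts/rpc.py -s "$rpc_server" nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"
            waitfornbd "$(basename "${nbd_list[$i]}")"
        done
    }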
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:17.205 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:17.205 { 00:15:17.205 "nbd_device": "/dev/nbd0", 00:15:17.205 "bdev_name": "nvme0n1" 00:15:17.205 }, 00:15:17.205 { 00:15:17.205 "nbd_device": "/dev/nbd1", 00:15:17.205 "bdev_name": "nvme0n2" 00:15:17.205 }, 00:15:17.205 { 00:15:17.205 "nbd_device": "/dev/nbd10", 00:15:17.205 "bdev_name": "nvme0n3" 00:15:17.205 }, 00:15:17.205 { 00:15:17.205 "nbd_device": "/dev/nbd11", 00:15:17.205 "bdev_name": "nvme1n1" 00:15:17.205 }, 00:15:17.205 { 00:15:17.205 "nbd_device": "/dev/nbd12", 00:15:17.205 "bdev_name": "nvme2n1" 00:15:17.205 }, 00:15:17.205 { 00:15:17.205 "nbd_device": "/dev/nbd13", 00:15:17.205 "bdev_name": "nvme3n1" 00:15:17.205 } 00:15:17.205 ]' 00:15:17.205 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:17.205 /dev/nbd1 00:15:17.205 /dev/nbd10 00:15:17.205 /dev/nbd11 00:15:17.205 /dev/nbd12 00:15:17.205 /dev/nbd13' 00:15:17.205 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:17.205 /dev/nbd1 00:15:17.205 /dev/nbd10 00:15:17.205 /dev/nbd11 00:15:17.205 /dev/nbd12 00:15:17.205 /dev/nbd13' 00:15:17.205 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:17.205 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:17.205 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:17.205 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:17.205 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:17.205 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:17.205 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:17.205 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:17.205 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:17.205 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:17.205 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:17.205 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:17.205 256+0 records in 00:15:17.205 256+0 records out 00:15:17.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110541 s, 94.9 MB/s 00:15:17.205 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:17.205 18:28:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:17.466 256+0 records in 00:15:17.466 256+0 records out 00:15:17.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.205086 s, 5.1 MB/s 00:15:17.466 18:28:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:17.466 18:28:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:17.726 256+0 records in 00:15:17.726 256+0 records out 00:15:17.726 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.251817 s, 4.2 MB/s 00:15:17.726 18:28:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:17.726 18:28:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:17.988 256+0 records in 00:15:17.988 256+0 records out 00:15:17.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.22082 s, 4.7 MB/s 00:15:17.988 18:28:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:17.988 18:28:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:18.250 256+0 records in 00:15:18.250 256+0 records out 00:15:18.250 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.213511 s, 4.9 MB/s 00:15:18.250 18:28:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:18.250 18:28:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:18.511 256+0 records in 00:15:18.511 256+0 records out 00:15:18.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.295856 s, 3.5 MB/s 00:15:18.511 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:18.511 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:18.774 256+0 records in 00:15:18.774 256+0 records out 00:15:18.774 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.241972 s, 4.3 MB/s 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.774 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:19.033 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:19.033 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:19.033 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:19.033 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.033 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.033 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:19.033 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:19.033 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.033 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.033 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:19.291 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:19.291 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:19.291 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:19.291 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.291 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.291 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:19.291 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:19.291 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.291 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.291 18:28:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
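The data-integrity pass above has two phases driven by the same 1 MiB scratch pattern: the write phase seeds test/bdev/nbdrandtest from /dev/urandom and copies it to every device with O_DIRECT, and the verify phase re-reads each device with cmp -b -n 1M against the pattern, so any dropped or reordered block fails byte by byte. A sketch of nbd_dd_data_verify as traced at bdev/nbd_common.sh@70-85:

    nbd_dd_data_verify() {
        local nbd_list=($1)
        local operation=$2
        local tmp_file=test/bdev/nbdrandtest
        local i
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"   # -b reports differing bytes
            done
            rm "$tmp_file"
        fi
    }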
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:19.291 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:19.291 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:19.291 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:19.291 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.291 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.291 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:19.291 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:19.291 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.291 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.291 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:19.549 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:19.549 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:19.549 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:19.549 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.549 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.549 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:19.549 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:19.549 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.549 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.549 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:19.808 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:19.808 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:19.808 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:19.808 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.808 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.808 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:19.808 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:19.808 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.808 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.808 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:20.066 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:20.066 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:20.066 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:20.066 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.066 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:15:20.066 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:20.066 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:20.066 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.066 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:20.066 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:20.066 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:20.324 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:20.324 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:20.324 18:28:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:20.324 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:20.324 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:20.324 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:20.324 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:20.324 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:20.324 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:20.324 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:20.324 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:20.324 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:20.324 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:20.324 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:20.324 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:20.324 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:20.582 malloc_lvol_verify 00:15:20.582 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:20.582 62420a7c-ecd2-4803-9b1a-d51f7bfd1e85 00:15:20.582 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:20.840 c31bbd10-73ca-44d6-992c-ef7227378890 00:15:20.840 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:21.099 /dev/nbd0 00:15:21.099 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:21.099 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:21.099 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:21.099 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:21.099 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:21.099 mke2fs 1.47.0 (5-Feb-2023) 00:15:21.099 
Discarding device blocks: 0/4096 done 00:15:21.099 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:21.099 00:15:21.099 Allocating group tables: 0/1 done 00:15:21.099 Writing inode tables: 0/1 done 00:15:21.099 Creating journal (1024 blocks): done 00:15:21.099 Writing superblocks and filesystem accounting information: 0/1 done 00:15:21.099 00:15:21.099 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:21.099 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:21.099 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:21.099 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:21.099 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:21.099 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.099 18:28:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:21.359 18:28:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:21.359 18:28:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:21.359 18:28:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:21.359 18:28:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.359 18:28:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.359 18:28:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:21.359 18:28:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:21.359 18:28:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.359 18:28:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 83850 00:15:21.359 18:28:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 83850 ']' 00:15:21.359 18:28:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 83850 00:15:21.359 18:28:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:21.360 18:28:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.360 18:28:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83850 00:15:21.360 18:28:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:21.360 killing process with pid 83850 00:15:21.360 18:28:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.360 18:28:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83850' 00:15:21.360 18:28:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 83850 00:15:21.360 18:28:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 83850 00:15:21.619 18:28:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:21.619 00:15:21.619 real 0m10.458s 00:15:21.619 user 0m14.078s 00:15:21.619 sys 0m3.920s 00:15:21.619 18:28:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.619 ************************************ 00:15:21.619 END TEST bdev_nbd 00:15:21.619 ************************************ 00:15:21.619 18:28:41 blockdev_xnvme.bdev_nbd -- 
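The lvol round-trip traced above layers the whole stack: a 16 MiB malloc bdev with 512-byte blocks, an lvstore on top of it, a 4 MiB logical volume, an NBD export of that volume, and finally mkfs.ext4 to prove the kernel can drive it end to end. A sketch of nbd_with_lvol_verify as traced at bdev/nbd_common.sh@131-142 (the argument passed to the capacity helper is an assumption; the trace shows it reading /sys/block/nbd0/size):

    nbd_with_lvol_verify() {
        local rpc_server=$1
        local nbd=$2
        scripts/rpc.py -s "$rpc_server" bdev_malloc_create -b malloc_lvol_verify 16 512
        scripts/rpc.py -s "$rpc_server" bdev_lvol_create_lvstore malloc_lvol_verify lvs
        scripts/rpc.py -s "$rpc_server" bdev_lvol_create lvol 4 -l lvs
        scripts/rpc.py -s "$rpc_server" nbd_start_disk lvs/lvol "$nbd"
        wait_for_nbd_set_capacity "$nbd"   # waits for /sys/block/<nbd>/size != 0
        mkfs.ext4 "$nbd"
        nbd_stop_disks "$rpc_server" "$nbd"
    }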
common/autotest_common.sh@10 -- # set +x 00:15:21.619 18:28:41 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:21.619 18:28:41 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:21.619 18:28:41 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:21.619 18:28:41 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:21.619 18:28:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:21.619 18:28:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.619 18:28:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.619 ************************************ 00:15:21.619 START TEST bdev_fio 00:15:21.619 ************************************ 00:15:21.619 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:21.619 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:21.619 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:21.619 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:21.619 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:21.619 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:21.619 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:21.619 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:21.619 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:21.619 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:21.619 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:21.619 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:21.619 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:21.619 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:21.619 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:21.620 
18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:21.620 ************************************ 00:15:21.620 START TEST bdev_fio_rw_verify 00:15:21.620 ************************************ 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
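fio_config_gen writes the [global] section of bdev.fio and the loop above appends one job section per bdev; the generated file itself is never echoed into the log, so only the job names and the serialize_overlap=1 line (added after the fio --version check, since fio 3.x needs it for AIO-style bdevs under verify) are visible. A hypothetical reconstruction consistent with the job banners fio prints further down (rw=randwrite; bs, iodepth, and runtime arrive via the command line; the verify algorithm is assumed):

    [global]
    thread=1
    rw=randwrite          # matches the job banners in the fio output below
    verify=crc32c         # assumed; the "verify" workload implies a checksum
    serialize_overlap=1   # appended after the fio --version probe in the trace

    [job_nvme0n1]
    filename=nvme0n1
    # ...one such section per bdev, through job_nvme3n1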
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:21.620 18:28:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:21.878 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:21.878 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:21.878 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:21.878 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:21.878 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:21.878 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:21.878 fio-3.35 00:15:21.878 Starting 6 threads 00:15:34.105 00:15:34.105 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=84265: Fri Nov 29 18:28:53 2024 00:15:34.105 read: IOPS=16.5k, BW=64.4MiB/s (67.5MB/s)(644MiB/10002msec) 00:15:34.105 slat (usec): min=2, max=2209, avg= 6.76, stdev=17.30 00:15:34.105 clat (usec): min=62, max=6927, avg=1170.46, 
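Before launching fio, the wrapper resolves which sanitizer runtime the spdk_bdev plugin was linked against and preloads it, because dlopen()ing an ASan-instrumented plugin into an uninstrumented fio would otherwise abort at startup. A sketch of the logic traced at common/autotest_common.sh@1341-1356, which in this run resolves /usr/lib64/libasan.so.8:

    fio_bdev() {
        local plugin=build/fio/spdk_bdev
        local sanitizers=('libasan' 'libclang_rt.asan')
        local sanitizer asan_lib=
        for sanitizer in "${sanitizers[@]}"; do
            # Find the sanitizer runtime the plugin is linked against.
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n $asan_lib ]] && break
        done
        # Preload the runtime ahead of the plugin itself.
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
    }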
stdev=772.38 00:15:34.105 lat (usec): min=66, max=6932, avg=1177.22, stdev=773.37 00:15:34.105 clat percentiles (usec): 00:15:34.105 | 50.000th=[ 1057], 99.000th=[ 3523], 99.900th=[ 4883], 99.990th=[ 6390], 00:15:34.105 | 99.999th=[ 6915] 00:15:34.105 write: IOPS=16.8k, BW=65.4MiB/s (68.6MB/s)(655MiB/10002msec); 0 zone resets 00:15:34.105 slat (usec): min=13, max=5131, avg=38.47, stdev=124.67 00:15:34.105 clat (usec): min=74, max=8368, avg=1398.94, stdev=813.59 00:15:34.105 lat (usec): min=94, max=8385, avg=1437.41, stdev=825.64 00:15:34.105 clat percentiles (usec): 00:15:34.105 | 50.000th=[ 1303], 99.000th=[ 3851], 99.900th=[ 5211], 99.990th=[ 6783], 00:15:34.105 | 99.999th=[ 8291] 00:15:34.105 bw ( KiB/s): min=48635, max=139429, per=100.00%, avg=67894.53, stdev=4120.42, samples=114 00:15:34.105 iops : min=12155, max=34856, avg=16972.68, stdev=1030.15, samples=114 00:15:34.105 lat (usec) : 100=0.02%, 250=5.40%, 500=11.55%, 750=12.06%, 1000=12.12% 00:15:34.105 lat (msec) : 2=41.90%, 4=16.36%, 10=0.60% 00:15:34.105 cpu : usr=43.68%, sys=33.03%, ctx=5806, majf=0, minf=16122 00:15:34.105 IO depths : 1=11.5%, 2=24.0%, 4=51.0%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:34.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.105 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.105 issued rwts: total=164823,167553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.105 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:34.105 00:15:34.105 Run status group 0 (all jobs): 00:15:34.105 READ: bw=64.4MiB/s (67.5MB/s), 64.4MiB/s-64.4MiB/s (67.5MB/s-67.5MB/s), io=644MiB (675MB), run=10002-10002msec 00:15:34.105 WRITE: bw=65.4MiB/s (68.6MB/s), 65.4MiB/s-65.4MiB/s (68.6MB/s-68.6MB/s), io=655MiB (686MB), run=10002-10002msec 00:15:34.105 ----------------------------------------------------- 00:15:34.105 Suppressions used: 00:15:34.105 count bytes template 00:15:34.105 6 48 /usr/src/fio/parse.c 00:15:34.105 2623 251808 /usr/src/fio/iolog.c 00:15:34.105 1 8 libtcmalloc_minimal.so 00:15:34.105 1 904 libcrypto.so 00:15:34.105 ----------------------------------------------------- 00:15:34.105 00:15:34.105 00:15:34.105 real 0m12.237s 00:15:34.105 user 0m26.933s 00:15:34.105 sys 0m20.126s 00:15:34.105 18:28:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.105 18:28:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:34.105 ************************************ 00:15:34.105 END TEST bdev_fio_rw_verify 00:15:34.105 ************************************ 00:15:34.105 18:28:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:34.105 18:28:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:34.105 18:28:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
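The summary lines above are internally consistent, which is a quick sanity check on any run: 164,823 reads completed in 10.002 s is about 16.5k IOPS, and at the 4 KiB block size that is roughly 16,480 * 4,096 B/s, or 67.5 MB/s (64.4 MiB/s), matching the READ line of the status group; the 167,553 writes likewise work out to about 16.8k IOPS and 65.4 MiB/s.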
fio_dir=/usr/src/fio 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "0658035f-8b04-41c1-a691-de65c5a25688"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0658035f-8b04-41c1-a691-de65c5a25688",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "5fec22a1-faa5-4ac3-be9b-cb2dcaf0e48c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5fec22a1-faa5-4ac3-be9b-cb2dcaf0e48c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "67f604d4-5a00-4878-9445-986bdb225882"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "67f604d4-5a00-4878-9445-986bdb225882",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2c0fd819-d4f7-4b82-b2be-c628337fc08e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2c0fd819-d4f7-4b82-b2be-c628337fc08e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "d51bc708-865c-454a-b6a6-a1bd9489ba1f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d51bc708-865c-454a-b6a6-a1bd9489ba1f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "1d572bba-bec0-4971-983f-dffb87a55cb3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1d572bba-bec0-4971-983f-dffb87a55cb3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:34.106 /home/vagrant/spdk_repo/spdk 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:15:34.106 00:15:34.106 real 0m12.402s 00:15:34.106 user 0m27.002s 00:15:34.106 sys 0m20.208s 00:15:34.106 ************************************ 00:15:34.106 END TEST bdev_fio 00:15:34.106 ************************************ 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.106 18:28:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:34.106 18:28:53 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:34.106 18:28:53 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:34.106 18:28:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:34.106 18:28:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.106 18:28:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:34.106 ************************************ 00:15:34.106 START TEST bdev_verify 00:15:34.106 ************************************ 00:15:34.106 18:28:53 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:34.106 [2024-11-29 18:28:53.900282] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:15:34.106 [2024-11-29 18:28:53.900428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84435 ] 00:15:34.366 [2024-11-29 18:28:54.061916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:34.366 [2024-11-29 18:28:54.094234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.366 [2024-11-29 18:28:54.094315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.626 Running I/O for 5 seconds... 
00:15:36.947 26176.00 IOPS, 102.25 MiB/s [2024-11-29T18:28:57.793Z] 24400.00 IOPS, 95.31 MiB/s [2024-11-29T18:28:58.733Z] 24170.67 IOPS, 94.42 MiB/s [2024-11-29T18:28:59.676Z] 24232.00 IOPS, 94.66 MiB/s [2024-11-29T18:28:59.676Z] 23692.80 IOPS, 92.55 MiB/s 00:15:39.771 Latency(us) 00:15:39.771 [2024-11-29T18:28:59.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.771 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:39.771 Verification LBA range: start 0x0 length 0x80000 00:15:39.771 nvme0n1 : 5.05 1747.44 6.83 0.00 0.00 73097.31 8771.74 68964.04 00:15:39.771 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:39.771 Verification LBA range: start 0x80000 length 0x80000 00:15:39.771 nvme0n1 : 5.04 1955.84 7.64 0.00 0.00 65324.47 7713.08 58881.58 00:15:39.771 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:39.771 Verification LBA range: start 0x0 length 0x80000 00:15:39.771 nvme0n2 : 5.06 1746.04 6.82 0.00 0.00 72961.77 16031.11 69367.34 00:15:39.771 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:39.771 Verification LBA range: start 0x80000 length 0x80000 00:15:39.771 nvme0n2 : 5.05 1926.07 7.52 0.00 0.00 66217.37 8721.33 56865.08 00:15:39.771 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:39.771 Verification LBA range: start 0x0 length 0x80000 00:15:39.771 nvme0n3 : 5.07 1767.04 6.90 0.00 0.00 71916.87 10435.35 66140.95 00:15:39.771 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:39.771 Verification LBA range: start 0x80000 length 0x80000 00:15:39.771 nvme0n3 : 5.06 1924.26 7.52 0.00 0.00 66153.97 9880.81 62511.26 00:15:39.771 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:39.771 Verification LBA range: start 0x0 length 0x20000 00:15:39.771 nvme1n1 : 5.07 1743.48 6.81 0.00 0.00 72700.16 7914.73 74610.22 00:15:39.771 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:39.771 Verification LBA range: start 0x20000 length 0x20000 00:15:39.771 nvme1n1 : 5.04 1929.30 7.54 0.00 0.00 65857.74 10082.46 63317.86 00:15:39.771 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:39.771 Verification LBA range: start 0x0 length 0xbd0bd 00:15:39.771 nvme2n1 : 5.08 2444.26 9.55 0.00 0.00 51633.37 5595.77 52428.80 00:15:39.771 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:39.771 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:39.771 nvme2n1 : 5.06 2573.31 10.05 0.00 0.00 49218.85 6427.57 55655.19 00:15:39.771 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:39.771 Verification LBA range: start 0x0 length 0xa0000 00:15:39.771 nvme3n1 : 5.07 1765.85 6.90 0.00 0.00 71485.52 7259.37 72593.72 00:15:39.771 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:39.771 Verification LBA range: start 0xa0000 length 0xa0000 00:15:39.771 nvme3n1 : 5.07 1970.45 7.70 0.00 0.00 64254.56 4738.76 57671.68 00:15:39.771 [2024-11-29T18:28:59.676Z] =================================================================================================================== 00:15:39.771 [2024-11-29T18:28:59.676Z] Total : 23493.34 91.77 0.00 0.00 64878.88 4738.76 74610.22 00:15:40.033 00:15:40.033 real 0m5.853s 00:15:40.033 user 0m9.173s 00:15:40.033 sys 0m1.608s 00:15:40.033 18:28:59 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.033 ************************************ 00:15:40.033 END TEST bdev_verify 00:15:40.033 ************************************ 00:15:40.033 18:28:59 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:40.033 18:28:59 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:40.033 18:28:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:40.033 18:28:59 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.033 18:28:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.033 ************************************ 00:15:40.033 START TEST bdev_verify_big_io 00:15:40.033 ************************************ 00:15:40.033 18:28:59 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:40.033 [2024-11-29 18:28:59.831291] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:15:40.033 [2024-11-29 18:28:59.831442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84521 ] 00:15:40.294 [2024-11-29 18:28:59.993405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:40.294 [2024-11-29 18:29:00.027712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.294 [2024-11-29 18:29:00.027783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.600 Running I/O for 5 seconds... 
00:15:46.838 2298.00 IOPS, 143.62 MiB/s [2024-11-29T18:29:06.743Z] 3238.00 IOPS, 202.38 MiB/s [2024-11-29T18:29:07.005Z] 2799.67 IOPS, 174.98 MiB/s 00:15:47.100 Latency(us) 00:15:47.100 [2024-11-29T18:29:07.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.100 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:47.100 Verification LBA range: start 0x0 length 0x8000 00:15:47.100 nvme0n1 : 6.26 81.77 5.11 0.00 0.00 1508541.83 75416.81 1677721.60 00:15:47.100 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:47.100 Verification LBA range: start 0x8000 length 0x8000 00:15:47.100 nvme0n1 : 5.96 118.08 7.38 0.00 0.00 1041904.39 36901.81 1780966.01 00:15:47.100 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:47.100 Verification LBA range: start 0x0 length 0x8000 00:15:47.100 nvme0n2 : 6.07 84.36 5.27 0.00 0.00 1397550.47 337157.51 1355082.83 00:15:47.100 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:47.100 Verification LBA range: start 0x8000 length 0x8000 00:15:47.100 nvme0n2 : 6.09 110.34 6.90 0.00 0.00 1075672.02 59688.17 1535760.54 00:15:47.100 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:47.100 Verification LBA range: start 0x0 length 0x8000 00:15:47.100 nvme0n3 : 6.17 93.37 5.84 0.00 0.00 1185381.70 89532.26 1716438.25 00:15:47.100 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:47.100 Verification LBA range: start 0x8000 length 0x8000 00:15:47.100 nvme0n3 : 6.07 121.26 7.58 0.00 0.00 938827.86 129055.51 1271196.75 00:15:47.100 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:47.100 Verification LBA range: start 0x0 length 0x2000 00:15:47.100 nvme1n1 : 6.17 80.04 5.00 0.00 0.00 1316426.59 37506.76 3381254.30 00:15:47.100 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:47.100 Verification LBA range: start 0x2000 length 0x2000 00:15:47.100 nvme1n1 : 6.09 147.01 9.19 0.00 0.00 771235.35 5847.83 1219574.55 00:15:47.100 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:47.100 Verification LBA range: start 0x0 length 0xbd0b 00:15:47.100 nvme2n1 : 6.26 99.63 6.23 0.00 0.00 1019795.42 5494.94 3510309.81 00:15:47.100 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:47.100 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:47.100 nvme2n1 : 6.10 116.93 7.31 0.00 0.00 938470.15 53235.40 2064888.12 00:15:47.100 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:47.100 Verification LBA range: start 0x0 length 0xa000 00:15:47.100 nvme3n1 : 6.39 170.25 10.64 0.00 0.00 570876.19 1190.99 1284102.30 00:15:47.100 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:47.100 Verification LBA range: start 0xa000 length 0xa000 00:15:47.100 nvme3n1 : 6.10 142.94 8.93 0.00 0.00 748050.44 5570.56 845313.58 00:15:47.100 [2024-11-29T18:29:07.005Z] =================================================================================================================== 00:15:47.100 [2024-11-29T18:29:07.005Z] Total : 1365.99 85.37 0.00 0.00 980862.62 1190.99 3510309.81 00:15:47.361 00:15:47.361 real 0m7.288s 00:15:47.361 user 0m13.411s 00:15:47.361 sys 0m0.445s 00:15:47.361 18:29:07 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.361 
************************************ 00:15:47.361 END TEST bdev_verify_big_io 00:15:47.361 ************************************ 00:15:47.361 18:29:07 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.361 18:29:07 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:47.361 18:29:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:47.361 18:29:07 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.361 18:29:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.361 ************************************ 00:15:47.361 START TEST bdev_write_zeroes 00:15:47.361 ************************************ 00:15:47.361 18:29:07 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:47.361 [2024-11-29 18:29:07.189200] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:15:47.361 [2024-11-29 18:29:07.189349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84630 ] 00:15:47.622 [2024-11-29 18:29:07.353188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.622 [2024-11-29 18:29:07.393035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.884 Running I/O for 1 seconds... 
00:15:48.827 74304.00 IOPS, 290.25 MiB/s 00:15:48.827 Latency(us) 00:15:48.827 [2024-11-29T18:29:08.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.827 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:48.827 nvme0n1 : 1.02 12200.39 47.66 0.00 0.00 10480.75 7561.85 20164.92 00:15:48.827 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:48.827 nvme0n2 : 1.02 12186.32 47.60 0.00 0.00 10483.55 7561.85 21374.82 00:15:48.827 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:48.828 nvme0n3 : 1.02 12172.74 47.55 0.00 0.00 10486.31 7561.85 21273.99 00:15:48.828 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:48.828 nvme1n1 : 1.02 12159.26 47.50 0.00 0.00 10489.10 7561.85 21273.99 00:15:48.828 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:48.828 nvme2n1 : 1.03 12654.97 49.43 0.00 0.00 10068.61 4486.70 20568.22 00:15:48.828 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:48.828 nvme3n1 : 1.02 12269.40 47.93 0.00 0.00 10302.60 3932.16 19055.85 00:15:48.828 [2024-11-29T18:29:08.733Z] =================================================================================================================== 00:15:48.828 [2024-11-29T18:29:08.733Z] Total : 73643.09 287.67 0.00 0.00 10382.59 3932.16 21374.82 00:15:49.088 00:15:49.088 real 0m1.710s 00:15:49.088 user 0m1.007s 00:15:49.088 sys 0m0.514s 00:15:49.088 18:29:08 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.088 ************************************ 00:15:49.088 END TEST bdev_write_zeroes 00:15:49.088 ************************************ 00:15:49.088 18:29:08 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:49.088 18:29:08 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:49.088 18:29:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:49.088 18:29:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.088 18:29:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:49.088 ************************************ 00:15:49.088 START TEST bdev_json_nonenclosed 00:15:49.088 ************************************ 00:15:49.088 18:29:08 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:49.088 [2024-11-29 18:29:08.960599] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:15:49.088 [2024-11-29 18:29:08.960738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84662 ] 00:15:49.349 [2024-11-29 18:29:09.127735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.349 [2024-11-29 18:29:09.156985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.349 [2024-11-29 18:29:09.157094] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:49.350 [2024-11-29 18:29:09.157112] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:49.350 [2024-11-29 18:29:09.157125] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:49.350 00:15:49.350 real 0m0.348s 00:15:49.350 user 0m0.123s 00:15:49.350 sys 0m0.120s 00:15:49.350 18:29:09 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.350 ************************************ 00:15:49.350 18:29:09 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:49.350 END TEST bdev_json_nonenclosed 00:15:49.350 ************************************ 00:15:49.610 18:29:09 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:49.610 18:29:09 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:49.610 18:29:09 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.610 18:29:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:49.610 ************************************ 00:15:49.610 START TEST bdev_json_nonarray 00:15:49.610 ************************************ 00:15:49.610 18:29:09 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:49.610 [2024-11-29 18:29:09.394818] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:15:49.611 [2024-11-29 18:29:09.394953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84689 ] 00:15:49.872 [2024-11-29 18:29:09.559708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.872 [2024-11-29 18:29:09.587767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.872 [2024-11-29 18:29:09.587879] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
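Both JSON negative tests exercise the same validation path: bdevperf requires a configuration shaped like '{ "subsystems": [ ... ] }'. The nonenclosed case above fails at json_config.c:608 because the top-level braces are missing; this nonarray case fails at json_config.c:614 because "subsystems" is present but is not an array. A hedged illustration of the three shapes (the valid form matches the config dumps later in this log; the two invalid forms are reconstructions inferred from the error messages, not the literal contents of nonenclosed.json and nonarray.json):

    { "subsystems": [] }     <- valid: enclosed in {} and an array
    "subsystems": []         <- rejected: not enclosed in {}
    { "subsystems": {} }     <- rejected: "subsystems" should be an array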
00:15:49.872 [2024-11-29 18:29:09.587899] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:49.872 [2024-11-29 18:29:09.587912] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:49.872 00:15:49.872 real 0m0.351s 00:15:49.872 user 0m0.138s 00:15:49.872 sys 0m0.105s 00:15:49.872 18:29:09 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.872 ************************************ 00:15:49.872 END TEST bdev_json_nonarray 00:15:49.872 ************************************ 00:15:49.872 18:29:09 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:49.872 18:29:09 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:15:49.872 18:29:09 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:15:49.872 18:29:09 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:15:49.872 18:29:09 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:15:49.872 18:29:09 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:15:49.872 18:29:09 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:49.872 18:29:09 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:49.872 18:29:09 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:49.872 18:29:09 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:49.872 18:29:09 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:49.872 18:29:09 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:49.872 18:29:09 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:50.444 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:53.748 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:53.748 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:53.748 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:54.009 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:54.009 00:15:54.009 real 0m48.561s 00:15:54.009 user 1m13.049s 00:15:54.009 sys 0m35.199s 00:15:54.009 18:29:13 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.009 18:29:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:54.009 ************************************ 00:15:54.009 END TEST blockdev_xnvme 00:15:54.009 ************************************ 00:15:54.009 18:29:13 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:54.009 18:29:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:54.009 18:29:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.009 18:29:13 -- common/autotest_common.sh@10 -- # set +x 00:15:54.009 ************************************ 00:15:54.009 START TEST ublk 00:15:54.009 ************************************ 00:15:54.009 18:29:13 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:54.009 * Looking for test storage... 
00:15:54.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:54.269 18:29:13 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:54.269 18:29:13 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:54.269 18:29:13 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:15:54.269 18:29:13 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:54.269 18:29:13 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.269 18:29:13 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.269 18:29:13 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.269 18:29:13 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.270 18:29:13 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.270 18:29:13 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.270 18:29:13 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.270 18:29:13 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.270 18:29:13 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.270 18:29:13 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.270 18:29:13 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.270 18:29:13 ublk -- scripts/common.sh@344 -- # case "$op" in 00:15:54.270 18:29:13 ublk -- scripts/common.sh@345 -- # : 1 00:15:54.270 18:29:13 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.270 18:29:13 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:54.270 18:29:13 ublk -- scripts/common.sh@365 -- # decimal 1 00:15:54.270 18:29:13 ublk -- scripts/common.sh@353 -- # local d=1 00:15:54.270 18:29:13 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.270 18:29:13 ublk -- scripts/common.sh@355 -- # echo 1 00:15:54.270 18:29:13 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.270 18:29:13 ublk -- scripts/common.sh@366 -- # decimal 2 00:15:54.270 18:29:13 ublk -- scripts/common.sh@353 -- # local d=2 00:15:54.270 18:29:13 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.270 18:29:13 ublk -- scripts/common.sh@355 -- # echo 2 00:15:54.270 18:29:13 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.270 18:29:13 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.270 18:29:13 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.270 18:29:13 ublk -- scripts/common.sh@368 -- # return 0 00:15:54.270 18:29:13 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.270 18:29:13 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:54.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.270 --rc genhtml_branch_coverage=1 00:15:54.270 --rc genhtml_function_coverage=1 00:15:54.270 --rc genhtml_legend=1 00:15:54.270 --rc geninfo_all_blocks=1 00:15:54.270 --rc geninfo_unexecuted_blocks=1 00:15:54.270 00:15:54.270 ' 00:15:54.270 18:29:13 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:54.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.270 --rc genhtml_branch_coverage=1 00:15:54.270 --rc genhtml_function_coverage=1 00:15:54.270 --rc genhtml_legend=1 00:15:54.270 --rc geninfo_all_blocks=1 00:15:54.270 --rc geninfo_unexecuted_blocks=1 00:15:54.270 00:15:54.270 ' 00:15:54.270 18:29:13 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:54.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.270 --rc genhtml_branch_coverage=1 00:15:54.270 --rc 
genhtml_function_coverage=1 00:15:54.270 --rc genhtml_legend=1 00:15:54.270 --rc geninfo_all_blocks=1 00:15:54.270 --rc geninfo_unexecuted_blocks=1 00:15:54.270 00:15:54.270 ' 00:15:54.270 18:29:13 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:54.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.270 --rc genhtml_branch_coverage=1 00:15:54.270 --rc genhtml_function_coverage=1 00:15:54.270 --rc genhtml_legend=1 00:15:54.270 --rc geninfo_all_blocks=1 00:15:54.270 --rc geninfo_unexecuted_blocks=1 00:15:54.270 00:15:54.270 ' 00:15:54.270 18:29:13 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:54.270 18:29:13 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:54.270 18:29:13 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:54.270 18:29:13 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:54.270 18:29:13 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:54.270 18:29:13 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:54.270 18:29:13 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:54.270 18:29:13 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:54.270 18:29:13 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:54.270 18:29:13 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:54.270 18:29:13 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:54.270 18:29:13 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:54.270 18:29:13 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:54.270 18:29:13 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:54.270 18:29:13 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:54.270 18:29:13 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:54.270 18:29:13 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:54.270 18:29:13 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:54.270 18:29:13 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:54.270 18:29:14 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:54.270 18:29:14 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:54.270 18:29:14 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.270 18:29:14 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.270 ************************************ 00:15:54.270 START TEST test_save_ublk_config 00:15:54.270 ************************************ 00:15:54.270 18:29:14 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:15:54.270 18:29:14 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:54.270 18:29:14 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=84974 00:15:54.270 18:29:14 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:54.270 18:29:14 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 84974 00:15:54.270 18:29:14 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 84974 ']' 00:15:54.270 18:29:14 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.270 18:29:14 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:54.270 18:29:14 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:54.270 18:29:14 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.270 18:29:14 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.270 18:29:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:54.270 [2024-11-29 18:29:14.116007] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:15:54.270 [2024-11-29 18:29:14.116158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84974 ] 00:15:54.531 [2024-11-29 18:29:14.280087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.531 [2024-11-29 18:29:14.309018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.103 18:29:14 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.103 18:29:14 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:15:55.103 18:29:14 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:55.103 18:29:14 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:55.103 18:29:14 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.103 18:29:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:55.103 [2024-11-29 18:29:14.963485] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:55.103 [2024-11-29 18:29:14.964403] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:55.103 malloc0 00:15:55.103 [2024-11-29 18:29:14.995612] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:55.103 [2024-11-29 18:29:14.995718] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:55.104 [2024-11-29 18:29:14.995727] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:55.104 [2024-11-29 18:29:14.995742] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:55.104 [2024-11-29 18:29:15.004573] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:55.104 [2024-11-29 18:29:15.004619] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:55.364 [2024-11-29 18:29:15.011500] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:55.364 [2024-11-29 18:29:15.011644] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:55.364 [2024-11-29 18:29:15.028480] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:55.364 0 00:15:55.364 18:29:15 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.365 18:29:15 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:55.365 18:29:15 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.365 18:29:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:55.626 18:29:15 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.626 18:29:15 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:55.626 
"subsystems": [ 00:15:55.626 { 00:15:55.626 "subsystem": "fsdev", 00:15:55.626 "config": [ 00:15:55.626 { 00:15:55.626 "method": "fsdev_set_opts", 00:15:55.626 "params": { 00:15:55.626 "fsdev_io_pool_size": 65535, 00:15:55.626 "fsdev_io_cache_size": 256 00:15:55.626 } 00:15:55.626 } 00:15:55.626 ] 00:15:55.626 }, 00:15:55.626 { 00:15:55.626 "subsystem": "keyring", 00:15:55.626 "config": [] 00:15:55.626 }, 00:15:55.626 { 00:15:55.626 "subsystem": "iobuf", 00:15:55.626 "config": [ 00:15:55.627 { 00:15:55.627 "method": "iobuf_set_options", 00:15:55.627 "params": { 00:15:55.627 "small_pool_count": 8192, 00:15:55.627 "large_pool_count": 1024, 00:15:55.627 "small_bufsize": 8192, 00:15:55.627 "large_bufsize": 135168, 00:15:55.627 "enable_numa": false 00:15:55.627 } 00:15:55.627 } 00:15:55.627 ] 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "subsystem": "sock", 00:15:55.627 "config": [ 00:15:55.627 { 00:15:55.627 "method": "sock_set_default_impl", 00:15:55.627 "params": { 00:15:55.627 "impl_name": "posix" 00:15:55.627 } 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "method": "sock_impl_set_options", 00:15:55.627 "params": { 00:15:55.627 "impl_name": "ssl", 00:15:55.627 "recv_buf_size": 4096, 00:15:55.627 "send_buf_size": 4096, 00:15:55.627 "enable_recv_pipe": true, 00:15:55.627 "enable_quickack": false, 00:15:55.627 "enable_placement_id": 0, 00:15:55.627 "enable_zerocopy_send_server": true, 00:15:55.627 "enable_zerocopy_send_client": false, 00:15:55.627 "zerocopy_threshold": 0, 00:15:55.627 "tls_version": 0, 00:15:55.627 "enable_ktls": false 00:15:55.627 } 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "method": "sock_impl_set_options", 00:15:55.627 "params": { 00:15:55.627 "impl_name": "posix", 00:15:55.627 "recv_buf_size": 2097152, 00:15:55.627 "send_buf_size": 2097152, 00:15:55.627 "enable_recv_pipe": true, 00:15:55.627 "enable_quickack": false, 00:15:55.627 "enable_placement_id": 0, 00:15:55.627 "enable_zerocopy_send_server": true, 00:15:55.627 "enable_zerocopy_send_client": false, 00:15:55.627 "zerocopy_threshold": 0, 00:15:55.627 "tls_version": 0, 00:15:55.627 "enable_ktls": false 00:15:55.627 } 00:15:55.627 } 00:15:55.627 ] 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "subsystem": "vmd", 00:15:55.627 "config": [] 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "subsystem": "accel", 00:15:55.627 "config": [ 00:15:55.627 { 00:15:55.627 "method": "accel_set_options", 00:15:55.627 "params": { 00:15:55.627 "small_cache_size": 128, 00:15:55.627 "large_cache_size": 16, 00:15:55.627 "task_count": 2048, 00:15:55.627 "sequence_count": 2048, 00:15:55.627 "buf_count": 2048 00:15:55.627 } 00:15:55.627 } 00:15:55.627 ] 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "subsystem": "bdev", 00:15:55.627 "config": [ 00:15:55.627 { 00:15:55.627 "method": "bdev_set_options", 00:15:55.627 "params": { 00:15:55.627 "bdev_io_pool_size": 65535, 00:15:55.627 "bdev_io_cache_size": 256, 00:15:55.627 "bdev_auto_examine": true, 00:15:55.627 "iobuf_small_cache_size": 128, 00:15:55.627 "iobuf_large_cache_size": 16 00:15:55.627 } 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "method": "bdev_raid_set_options", 00:15:55.627 "params": { 00:15:55.627 "process_window_size_kb": 1024, 00:15:55.627 "process_max_bandwidth_mb_sec": 0 00:15:55.627 } 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "method": "bdev_iscsi_set_options", 00:15:55.627 "params": { 00:15:55.627 "timeout_sec": 30 00:15:55.627 } 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "method": "bdev_nvme_set_options", 00:15:55.627 "params": { 00:15:55.627 "action_on_timeout": "none", 
00:15:55.627 "timeout_us": 0, 00:15:55.627 "timeout_admin_us": 0, 00:15:55.627 "keep_alive_timeout_ms": 10000, 00:15:55.627 "arbitration_burst": 0, 00:15:55.627 "low_priority_weight": 0, 00:15:55.627 "medium_priority_weight": 0, 00:15:55.627 "high_priority_weight": 0, 00:15:55.627 "nvme_adminq_poll_period_us": 10000, 00:15:55.627 "nvme_ioq_poll_period_us": 0, 00:15:55.627 "io_queue_requests": 0, 00:15:55.627 "delay_cmd_submit": true, 00:15:55.627 "transport_retry_count": 4, 00:15:55.627 "bdev_retry_count": 3, 00:15:55.627 "transport_ack_timeout": 0, 00:15:55.627 "ctrlr_loss_timeout_sec": 0, 00:15:55.627 "reconnect_delay_sec": 0, 00:15:55.627 "fast_io_fail_timeout_sec": 0, 00:15:55.627 "disable_auto_failback": false, 00:15:55.627 "generate_uuids": false, 00:15:55.627 "transport_tos": 0, 00:15:55.627 "nvme_error_stat": false, 00:15:55.627 "rdma_srq_size": 0, 00:15:55.627 "io_path_stat": false, 00:15:55.627 "allow_accel_sequence": false, 00:15:55.627 "rdma_max_cq_size": 0, 00:15:55.627 "rdma_cm_event_timeout_ms": 0, 00:15:55.627 "dhchap_digests": [ 00:15:55.627 "sha256", 00:15:55.627 "sha384", 00:15:55.627 "sha512" 00:15:55.627 ], 00:15:55.627 "dhchap_dhgroups": [ 00:15:55.627 "null", 00:15:55.627 "ffdhe2048", 00:15:55.627 "ffdhe3072", 00:15:55.627 "ffdhe4096", 00:15:55.627 "ffdhe6144", 00:15:55.627 "ffdhe8192" 00:15:55.627 ] 00:15:55.627 } 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "method": "bdev_nvme_set_hotplug", 00:15:55.627 "params": { 00:15:55.627 "period_us": 100000, 00:15:55.627 "enable": false 00:15:55.627 } 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "method": "bdev_malloc_create", 00:15:55.627 "params": { 00:15:55.627 "name": "malloc0", 00:15:55.627 "num_blocks": 8192, 00:15:55.627 "block_size": 4096, 00:15:55.627 "physical_block_size": 4096, 00:15:55.627 "uuid": "01a2d74f-360a-4db1-b274-12b1208d5701", 00:15:55.627 "optimal_io_boundary": 0, 00:15:55.627 "md_size": 0, 00:15:55.627 "dif_type": 0, 00:15:55.627 "dif_is_head_of_md": false, 00:15:55.627 "dif_pi_format": 0 00:15:55.627 } 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "method": "bdev_wait_for_examine" 00:15:55.627 } 00:15:55.627 ] 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "subsystem": "scsi", 00:15:55.627 "config": null 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "subsystem": "scheduler", 00:15:55.627 "config": [ 00:15:55.627 { 00:15:55.627 "method": "framework_set_scheduler", 00:15:55.627 "params": { 00:15:55.627 "name": "static" 00:15:55.627 } 00:15:55.627 } 00:15:55.627 ] 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "subsystem": "vhost_scsi", 00:15:55.627 "config": [] 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "subsystem": "vhost_blk", 00:15:55.627 "config": [] 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "subsystem": "ublk", 00:15:55.628 "config": [ 00:15:55.628 { 00:15:55.628 "method": "ublk_create_target", 00:15:55.628 "params": { 00:15:55.628 "cpumask": "1" 00:15:55.628 } 00:15:55.628 }, 00:15:55.628 { 00:15:55.628 "method": "ublk_start_disk", 00:15:55.628 "params": { 00:15:55.628 "bdev_name": "malloc0", 00:15:55.628 "ublk_id": 0, 00:15:55.628 "num_queues": 1, 00:15:55.628 "queue_depth": 128 00:15:55.628 } 00:15:55.628 } 00:15:55.628 ] 00:15:55.628 }, 00:15:55.628 { 00:15:55.628 "subsystem": "nbd", 00:15:55.628 "config": [] 00:15:55.628 }, 00:15:55.628 { 00:15:55.628 "subsystem": "nvmf", 00:15:55.628 "config": [ 00:15:55.628 { 00:15:55.628 "method": "nvmf_set_config", 00:15:55.628 "params": { 00:15:55.628 "discovery_filter": "match_any", 00:15:55.628 "admin_cmd_passthru": { 00:15:55.628 "identify_ctrlr": false 
00:15:55.628 }, 00:15:55.628 "dhchap_digests": [ 00:15:55.628 "sha256", 00:15:55.628 "sha384", 00:15:55.628 "sha512" 00:15:55.628 ], 00:15:55.628 "dhchap_dhgroups": [ 00:15:55.628 "null", 00:15:55.628 "ffdhe2048", 00:15:55.628 "ffdhe3072", 00:15:55.628 "ffdhe4096", 00:15:55.628 "ffdhe6144", 00:15:55.628 "ffdhe8192" 00:15:55.628 ] 00:15:55.628 } 00:15:55.628 }, 00:15:55.628 { 00:15:55.628 "method": "nvmf_set_max_subsystems", 00:15:55.628 "params": { 00:15:55.628 "max_subsystems": 1024 00:15:55.628 } 00:15:55.628 }, 00:15:55.628 { 00:15:55.628 "method": "nvmf_set_crdt", 00:15:55.628 "params": { 00:15:55.628 "crdt1": 0, 00:15:55.628 "crdt2": 0, 00:15:55.628 "crdt3": 0 00:15:55.628 } 00:15:55.628 } 00:15:55.628 ] 00:15:55.628 }, 00:15:55.628 { 00:15:55.628 "subsystem": "iscsi", 00:15:55.628 "config": [ 00:15:55.628 { 00:15:55.628 "method": "iscsi_set_options", 00:15:55.628 "params": { 00:15:55.628 "node_base": "iqn.2016-06.io.spdk", 00:15:55.628 "max_sessions": 128, 00:15:55.628 "max_connections_per_session": 2, 00:15:55.628 "max_queue_depth": 64, 00:15:55.628 "default_time2wait": 2, 00:15:55.628 "default_time2retain": 20, 00:15:55.628 "first_burst_length": 8192, 00:15:55.628 "immediate_data": true, 00:15:55.628 "allow_duplicated_isid": false, 00:15:55.628 "error_recovery_level": 0, 00:15:55.628 "nop_timeout": 60, 00:15:55.628 "nop_in_interval": 30, 00:15:55.628 "disable_chap": false, 00:15:55.628 "require_chap": false, 00:15:55.628 "mutual_chap": false, 00:15:55.628 "chap_group": 0, 00:15:55.628 "max_large_datain_per_connection": 64, 00:15:55.628 "max_r2t_per_connection": 4, 00:15:55.628 "pdu_pool_size": 36864, 00:15:55.628 "immediate_data_pool_size": 16384, 00:15:55.628 "data_out_pool_size": 2048 00:15:55.628 } 00:15:55.628 } 00:15:55.628 ] 00:15:55.628 } 00:15:55.628 ] 00:15:55.628 }' 00:15:55.628 18:29:15 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 84974 00:15:55.628 18:29:15 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 84974 ']' 00:15:55.628 18:29:15 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 84974 00:15:55.628 18:29:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:15:55.628 18:29:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.628 18:29:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84974 00:15:55.628 18:29:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.628 killing process with pid 84974 00:15:55.628 18:29:15 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.628 18:29:15 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84974' 00:15:55.628 18:29:15 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 84974 00:15:55.628 18:29:15 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 84974 00:15:55.890 [2024-11-29 18:29:15.644206] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:55.890 [2024-11-29 18:29:15.681595] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:55.890 [2024-11-29 18:29:15.681751] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:55.890 [2024-11-29 18:29:15.689499] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:55.890 [2024-11-29 
18:29:15.689568] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:55.890 [2024-11-29 18:29:15.689577] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:55.890 [2024-11-29 18:29:15.689603] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:55.890 [2024-11-29 18:29:15.689756] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:56.464 18:29:16 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=85013 00:15:56.464 18:29:16 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 85013 00:15:56.464 18:29:16 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 85013 ']' 00:15:56.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.464 18:29:16 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.464 18:29:16 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.464 18:29:16 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.464 18:29:16 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.464 18:29:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:56.464 18:29:16 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:56.464 18:29:16 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:56.464 "subsystems": [ 00:15:56.464 { 00:15:56.464 "subsystem": "fsdev", 00:15:56.464 "config": [ 00:15:56.464 { 00:15:56.464 "method": "fsdev_set_opts", 00:15:56.464 "params": { 00:15:56.464 "fsdev_io_pool_size": 65535, 00:15:56.464 "fsdev_io_cache_size": 256 00:15:56.464 } 00:15:56.464 } 00:15:56.464 ] 00:15:56.464 }, 00:15:56.464 { 00:15:56.464 "subsystem": "keyring", 00:15:56.464 "config": [] 00:15:56.464 }, 00:15:56.464 { 00:15:56.464 "subsystem": "iobuf", 00:15:56.464 "config": [ 00:15:56.464 { 00:15:56.464 "method": "iobuf_set_options", 00:15:56.464 "params": { 00:15:56.464 "small_pool_count": 8192, 00:15:56.464 "large_pool_count": 1024, 00:15:56.464 "small_bufsize": 8192, 00:15:56.464 "large_bufsize": 135168, 00:15:56.464 "enable_numa": false 00:15:56.464 } 00:15:56.464 } 00:15:56.464 ] 00:15:56.464 }, 00:15:56.464 { 00:15:56.464 "subsystem": "sock", 00:15:56.464 "config": [ 00:15:56.464 { 00:15:56.464 "method": "sock_set_default_impl", 00:15:56.464 "params": { 00:15:56.464 "impl_name": "posix" 00:15:56.464 } 00:15:56.464 }, 00:15:56.464 { 00:15:56.464 "method": "sock_impl_set_options", 00:15:56.464 "params": { 00:15:56.464 "impl_name": "ssl", 00:15:56.464 "recv_buf_size": 4096, 00:15:56.464 "send_buf_size": 4096, 00:15:56.464 "enable_recv_pipe": true, 00:15:56.464 "enable_quickack": false, 00:15:56.464 "enable_placement_id": 0, 00:15:56.464 "enable_zerocopy_send_server": true, 00:15:56.464 "enable_zerocopy_send_client": false, 00:15:56.464 "zerocopy_threshold": 0, 00:15:56.464 "tls_version": 0, 00:15:56.464 "enable_ktls": false 00:15:56.464 } 00:15:56.464 }, 00:15:56.464 { 00:15:56.464 "method": "sock_impl_set_options", 00:15:56.464 "params": { 00:15:56.464 "impl_name": "posix", 00:15:56.464 "recv_buf_size": 2097152, 00:15:56.464 "send_buf_size": 2097152, 00:15:56.464 "enable_recv_pipe": true, 00:15:56.464 "enable_quickack": false, 00:15:56.464 "enable_placement_id": 0, 00:15:56.464 "enable_zerocopy_send_server": true, 
00:15:56.464 "enable_zerocopy_send_client": false, 00:15:56.464 "zerocopy_threshold": 0, 00:15:56.464 "tls_version": 0, 00:15:56.464 "enable_ktls": false 00:15:56.464 } 00:15:56.464 } 00:15:56.464 ] 00:15:56.464 }, 00:15:56.464 { 00:15:56.464 "subsystem": "vmd", 00:15:56.464 "config": [] 00:15:56.464 }, 00:15:56.464 { 00:15:56.464 "subsystem": "accel", 00:15:56.464 "config": [ 00:15:56.464 { 00:15:56.464 "method": "accel_set_options", 00:15:56.464 "params": { 00:15:56.464 "small_cache_size": 128, 00:15:56.464 "large_cache_size": 16, 00:15:56.464 "task_count": 2048, 00:15:56.464 "sequence_count": 2048, 00:15:56.464 "buf_count": 2048 00:15:56.464 } 00:15:56.464 } 00:15:56.464 ] 00:15:56.464 }, 00:15:56.464 { 00:15:56.464 "subsystem": "bdev", 00:15:56.464 "config": [ 00:15:56.464 { 00:15:56.464 "method": "bdev_set_options", 00:15:56.464 "params": { 00:15:56.464 "bdev_io_pool_size": 65535, 00:15:56.464 "bdev_io_cache_size": 256, 00:15:56.464 "bdev_auto_examine": true, 00:15:56.464 "iobuf_small_cache_size": 128, 00:15:56.464 "iobuf_large_cache_size": 16 00:15:56.464 } 00:15:56.464 }, 00:15:56.464 { 00:15:56.464 "method": "bdev_raid_set_options", 00:15:56.464 "params": { 00:15:56.464 "process_window_size_kb": 1024, 00:15:56.464 "process_max_bandwidth_mb_sec": 0 00:15:56.464 } 00:15:56.464 }, 00:15:56.464 { 00:15:56.464 "method": "bdev_iscsi_set_options", 00:15:56.464 "params": { 00:15:56.464 "timeout_sec": 30 00:15:56.464 } 00:15:56.464 }, 00:15:56.464 { 00:15:56.464 "method": "bdev_nvme_set_options", 00:15:56.464 "params": { 00:15:56.464 "action_on_timeout": "none", 00:15:56.464 "timeout_us": 0, 00:15:56.465 "timeout_admin_us": 0, 00:15:56.465 "keep_alive_timeout_ms": 10000, 00:15:56.465 "arbitration_burst": 0, 00:15:56.465 "low_priority_weight": 0, 00:15:56.465 "medium_priority_weight": 0, 00:15:56.465 "high_priority_weight": 0, 00:15:56.465 "nvme_adminq_poll_period_us": 10000, 00:15:56.465 "nvme_ioq_poll_period_us": 0, 00:15:56.465 "io_queue_requests": 0, 00:15:56.465 "delay_cmd_submit": true, 00:15:56.465 "transport_retry_count": 4, 00:15:56.465 "bdev_retry_count": 3, 00:15:56.465 "transport_ack_timeout": 0, 00:15:56.465 "ctrlr_loss_timeout_sec": 0, 00:15:56.465 "reconnect_delay_sec": 0, 00:15:56.465 "fast_io_fail_timeout_sec": 0, 00:15:56.465 "disable_auto_failback": false, 00:15:56.465 "generate_uuids": false, 00:15:56.465 "transport_tos": 0, 00:15:56.465 "nvme_error_stat": false, 00:15:56.465 "rdma_srq_size": 0, 00:15:56.465 "io_path_stat": false, 00:15:56.465 "allow_accel_sequence": false, 00:15:56.465 "rdma_max_cq_size": 0, 00:15:56.465 "rdma_cm_event_timeout_ms": 0, 00:15:56.465 "dhchap_digests": [ 00:15:56.465 "sha256", 00:15:56.465 "sha384", 00:15:56.465 "sha512" 00:15:56.465 ], 00:15:56.465 "dhchap_dhgroups": [ 00:15:56.465 "null", 00:15:56.465 "ffdhe2048", 00:15:56.465 "ffdhe3072", 00:15:56.465 "ffdhe4096", 00:15:56.465 "ffdhe6144", 00:15:56.465 "ffdhe8192" 00:15:56.465 ] 00:15:56.465 } 00:15:56.465 }, 00:15:56.465 { 00:15:56.465 "method": "bdev_nvme_set_hotplug", 00:15:56.465 "params": { 00:15:56.465 "period_us": 100000, 00:15:56.465 "enable": false 00:15:56.465 } 00:15:56.465 }, 00:15:56.465 { 00:15:56.465 "method": "bdev_malloc_create", 00:15:56.465 "params": { 00:15:56.465 "name": "malloc0", 00:15:56.465 "num_blocks": 8192, 00:15:56.465 "block_size": 4096, 00:15:56.465 "physical_block_size": 4096, 00:15:56.465 "uuid": "01a2d74f-360a-4db1-b274-12b1208d5701", 00:15:56.465 "optimal_io_boundary": 0, 00:15:56.465 "md_size": 0, 00:15:56.465 "dif_type": 0, 00:15:56.465 
"dif_is_head_of_md": false, 00:15:56.465 "dif_pi_format": 0 00:15:56.465 } 00:15:56.465 }, 00:15:56.465 { 00:15:56.465 "method": "bdev_wait_for_examine" 00:15:56.465 } 00:15:56.465 ] 00:15:56.465 }, 00:15:56.465 { 00:15:56.465 "subsystem": "scsi", 00:15:56.465 "config": null 00:15:56.465 }, 00:15:56.465 { 00:15:56.465 "subsystem": "scheduler", 00:15:56.465 "config": [ 00:15:56.465 { 00:15:56.465 "method": "framework_set_scheduler", 00:15:56.465 "params": { 00:15:56.465 "name": "static" 00:15:56.465 } 00:15:56.465 } 00:15:56.465 ] 00:15:56.465 }, 00:15:56.465 { 00:15:56.465 "subsystem": "vhost_scsi", 00:15:56.465 "config": [] 00:15:56.465 }, 00:15:56.465 { 00:15:56.465 "subsystem": "vhost_blk", 00:15:56.465 "config": [] 00:15:56.465 }, 00:15:56.465 { 00:15:56.465 "subsystem": "ublk", 00:15:56.465 "config": [ 00:15:56.465 { 00:15:56.465 "method": "ublk_create_target", 00:15:56.465 "params": { 00:15:56.465 "cpumask": "1" 00:15:56.465 } 00:15:56.465 }, 00:15:56.465 { 00:15:56.465 "method": "ublk_start_disk", 00:15:56.465 "params": { 00:15:56.465 "bdev_name": "malloc0", 00:15:56.465 "ublk_id": 0, 00:15:56.465 "num_queues": 1, 00:15:56.465 "queue_depth": 128 00:15:56.465 } 00:15:56.465 } 00:15:56.465 ] 00:15:56.465 }, 00:15:56.465 { 00:15:56.465 "subsystem": "nbd", 00:15:56.465 "config": [] 00:15:56.465 }, 00:15:56.465 { 00:15:56.465 "subsystem": "nvmf", 00:15:56.465 "config": [ 00:15:56.465 { 00:15:56.465 "method": "nvmf_set_config", 00:15:56.465 "params": { 00:15:56.465 "discovery_filter": "match_any", 00:15:56.465 "admin_cmd_passthru": { 00:15:56.465 "identify_ctrlr": false 00:15:56.465 }, 00:15:56.465 "dhchap_digests": [ 00:15:56.465 "sha256", 00:15:56.465 "sha384", 00:15:56.465 "sha512" 00:15:56.465 ], 00:15:56.465 "dhchap_dhgroups": [ 00:15:56.465 "null", 00:15:56.465 "ffdhe2048", 00:15:56.465 "ffdhe3072", 00:15:56.465 "ffdhe4096", 00:15:56.465 "ffdhe6144", 00:15:56.465 "ffdhe8192" 00:15:56.465 ] 00:15:56.465 } 00:15:56.465 }, 00:15:56.465 { 00:15:56.465 "method": "nvmf_set_max_subsystems", 00:15:56.465 "params": { 00:15:56.465 "max_subsystems": 1024 00:15:56.465 } 00:15:56.465 }, 00:15:56.465 { 00:15:56.465 "method": "nvmf_set_crdt", 00:15:56.465 "params": { 00:15:56.465 "crdt1": 0, 00:15:56.465 "crdt2": 0, 00:15:56.465 "crdt3": 0 00:15:56.465 } 00:15:56.465 } 00:15:56.465 ] 00:15:56.465 }, 00:15:56.465 { 00:15:56.465 "subsystem": "iscsi", 00:15:56.465 "config": [ 00:15:56.465 { 00:15:56.465 "method": "iscsi_set_options", 00:15:56.465 "params": { 00:15:56.465 "node_base": "iqn.2016-06.io.spdk", 00:15:56.465 "max_sessions": 128, 00:15:56.465 "max_connections_per_session": 2, 00:15:56.465 "max_queue_depth": 64, 00:15:56.465 "default_time2wait": 2, 00:15:56.465 "default_time2retain": 20, 00:15:56.465 "first_burst_length": 8192, 00:15:56.465 "immediate_data": true, 00:15:56.465 "allow_duplicated_isid": false, 00:15:56.465 "error_recovery_level": 0, 00:15:56.465 "nop_timeout": 60, 00:15:56.465 "nop_in_interval": 30, 00:15:56.465 "disable_chap": false, 00:15:56.465 "require_chap": false, 00:15:56.465 "mutual_chap": false, 00:15:56.465 "chap_group": 0, 00:15:56.465 "max_large_datain_per_connection": 64, 00:15:56.465 "max_r2t_per_connection": 4, 00:15:56.465 "pdu_pool_size": 36864, 00:15:56.465 "immediate_data_pool_size": 16384, 00:15:56.465 "data_out_pool_size": 2048 00:15:56.465 } 00:15:56.465 } 00:15:56.465 ] 00:15:56.465 } 00:15:56.465 ] 00:15:56.465 }' 00:15:56.465 [2024-11-29 18:29:16.236311] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:15:56.465 [2024-11-29 18:29:16.236467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85013 ] 00:15:56.726 [2024-11-29 18:29:16.397327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.726 [2024-11-29 18:29:16.426297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.987 [2024-11-29 18:29:16.791474] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:56.987 [2024-11-29 18:29:16.791811] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:56.987 [2024-11-29 18:29:16.799607] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:56.987 [2024-11-29 18:29:16.799702] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:56.987 [2024-11-29 18:29:16.799715] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:56.987 [2024-11-29 18:29:16.799725] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:56.987 [2024-11-29 18:29:16.808584] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:56.987 [2024-11-29 18:29:16.808613] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:56.987 [2024-11-29 18:29:16.815494] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:56.987 [2024-11-29 18:29:16.815618] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:56.987 [2024-11-29 18:29:16.832481] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:57.249 18:29:17 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.249 18:29:17 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:15:57.249 18:29:17 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:57.249 18:29:17 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:57.249 18:29:17 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.249 18:29:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:57.249 18:29:17 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.249 18:29:17 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:57.249 18:29:17 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:57.249 18:29:17 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 85013 00:15:57.249 18:29:17 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 85013 ']' 00:15:57.249 18:29:17 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 85013 00:15:57.249 18:29:17 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:15:57.249 18:29:17 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.249 18:29:17 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85013 00:15:57.510 killing process with pid 85013 00:15:57.510 18:29:17 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:57.510 
18:29:17 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:57.510 18:29:17 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85013' 00:15:57.510 18:29:17 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 85013 00:15:57.510 18:29:17 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 85013 00:15:57.771 [2024-11-29 18:29:17.462697] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:57.771 [2024-11-29 18:29:17.499496] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:57.771 [2024-11-29 18:29:17.499657] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:57.771 [2024-11-29 18:29:17.509476] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:57.771 [2024-11-29 18:29:17.509566] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:57.771 [2024-11-29 18:29:17.509577] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:57.771 [2024-11-29 18:29:17.509606] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:57.771 [2024-11-29 18:29:17.509769] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:58.344 18:29:17 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:58.344 00:15:58.344 real 0m3.951s 00:15:58.344 user 0m2.681s 00:15:58.344 sys 0m1.941s 00:15:58.344 18:29:17 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.344 18:29:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:58.344 ************************************ 00:15:58.344 END TEST test_save_ublk_config 00:15:58.344 ************************************ 00:15:58.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.344 18:29:18 ublk -- ublk/ublk.sh@139 -- # spdk_pid=85068 00:15:58.344 18:29:18 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:58.344 18:29:18 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:58.344 18:29:18 ublk -- ublk/ublk.sh@141 -- # waitforlisten 85068 00:15:58.344 18:29:18 ublk -- common/autotest_common.sh@835 -- # '[' -z 85068 ']' 00:15:58.344 18:29:18 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.344 18:29:18 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.344 18:29:18 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.344 18:29:18 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.344 18:29:18 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:58.344 [2024-11-29 18:29:18.107360] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
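The handoff above (kill pid 85013, then spawn pid 85068 and wait for its RPC socket) leans on two harness helpers that recur throughout this trace. A simplified sketch of their behavior, assuming the default RPC socket path; the names match autotest_common.sh, but these bodies are illustrative, not the real implementations:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        # Poll until the target has created its RPC UNIX socket,
        # bailing out if the process dies first.
        while kill -0 "$pid" 2>/dev/null && [[ ! -S "$rpc_addr" ]]; do
            sleep 0.1
        done
    }

    killprocess() {
        local pid=$1
        # Signal only a still-running process, then reap it.
        kill -0 "$pid" 2>/dev/null || return 0
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }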
00:15:58.344 [2024-11-29 18:29:18.107515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85068 ] 00:15:58.604 [2024-11-29 18:29:18.265801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:58.604 [2024-11-29 18:29:18.286883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.604 [2024-11-29 18:29:18.286924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.178 18:29:18 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.178 18:29:18 ublk -- common/autotest_common.sh@868 -- # return 0 00:15:59.178 18:29:18 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:59.178 18:29:18 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:59.179 18:29:18 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.179 18:29:18 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.179 ************************************ 00:15:59.179 START TEST test_create_ublk 00:15:59.179 ************************************ 00:15:59.179 18:29:18 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:15:59.179 18:29:18 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:59.179 18:29:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.179 18:29:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.179 [2024-11-29 18:29:18.936487] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:59.179 [2024-11-29 18:29:18.938068] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:59.179 18:29:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.179 18:29:18 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:59.179 18:29:18 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:59.179 18:29:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.179 18:29:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.179 18:29:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.179 18:29:19 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:59.179 18:29:19 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:59.179 18:29:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.179 18:29:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.179 [2024-11-29 18:29:19.021630] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:59.179 [2024-11-29 18:29:19.022085] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:59.179 [2024-11-29 18:29:19.022121] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:59.179 [2024-11-29 18:29:19.022132] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:59.179 [2024-11-29 18:29:19.033804] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:59.179 [2024-11-29 18:29:19.033840] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:59.179 
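The ADD_DEV, SET_PARAMS, START_DEV control-command handshake unfolding in these DEBUG lines is driven entirely by RPCs; rpc_cmd is a thin wrapper around scripts/rpc.py. Mirroring the rpc_cmd calls traced here, the equivalent manual invocations would be:

    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create 128 4096 -b Malloc0   # 128 MiB, 4 KiB blocks
    ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512    # ublk id 0, 4 queues, depth 512

Each submit/completed pair in the DEBUG output is one kernel control command carrying out a step of that bring-up.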
[2024-11-29 18:29:19.040504] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:59.179 [2024-11-29 18:29:19.041212] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:59.179 [2024-11-29 18:29:19.060523] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:59.179 18:29:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.179 18:29:19 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:59.179 18:29:19 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:59.179 18:29:19 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:59.179 18:29:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.179 18:29:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.441 18:29:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.441 18:29:19 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:59.441 { 00:15:59.441 "ublk_device": "/dev/ublkb0", 00:15:59.441 "id": 0, 00:15:59.441 "queue_depth": 512, 00:15:59.441 "num_queues": 4, 00:15:59.441 "bdev_name": "Malloc0" 00:15:59.441 } 00:15:59.441 ]' 00:15:59.441 18:29:19 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:59.441 18:29:19 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:59.441 18:29:19 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:59.441 18:29:19 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:59.441 18:29:19 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:59.441 18:29:19 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:59.441 18:29:19 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:59.441 18:29:19 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:59.441 18:29:19 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:59.441 18:29:19 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:59.441 18:29:19 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:59.441 18:29:19 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:59.441 18:29:19 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:59.441 18:29:19 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:59.441 18:29:19 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:59.441 18:29:19 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:59.441 18:29:19 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:59.441 18:29:19 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:59.441 18:29:19 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:59.441 18:29:19 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:59.441 18:29:19 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
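With the disk attributes verified via ublk_get_disks and jq, the harness expands the fio template above into a write-plus-verify job. Pulled out of the trace as a standalone command (requires fio and a live /dev/ublkb0; 134217728 bytes is the full 128 MiB malloc bdev):

    fio --name=fio_test --filename=/dev/ublkb0 \
        --offset=0 --size=134217728 --rw=write --direct=1 \
        --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc \
        --verify_state_save=0

Because the job is time-based, fio warns below that the verification read phase never starts: the 10-second budget is consumed entirely by the 0xcc pattern writes.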
00:15:59.441 18:29:19 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:59.703 fio: verification read phase will never start because write phase uses all of runtime 00:15:59.703 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:59.703 fio-3.35 00:15:59.703 Starting 1 process 00:16:09.682 00:16:09.682 fio_test: (groupid=0, jobs=1): err= 0: pid=85108: Fri Nov 29 18:29:29 2024 00:16:09.682 write: IOPS=11.8k, BW=46.1MiB/s (48.3MB/s)(461MiB/10001msec); 0 zone resets 00:16:09.682 clat (usec): min=50, max=7823, avg=83.92, stdev=189.84 00:16:09.682 lat (usec): min=51, max=7846, avg=84.37, stdev=190.02 00:16:09.682 clat percentiles (usec): 00:16:09.682 | 1.00th=[ 58], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 64], 00:16:09.682 | 30.00th=[ 65], 40.00th=[ 67], 50.00th=[ 69], 60.00th=[ 71], 00:16:09.682 | 70.00th=[ 73], 80.00th=[ 76], 90.00th=[ 81], 95.00th=[ 87], 00:16:09.682 | 99.00th=[ 281], 99.50th=[ 314], 99.90th=[ 3785], 99.95th=[ 4047], 00:16:09.682 | 99.99th=[ 4293] 00:16:09.682 bw ( KiB/s): min= 7752, max=57104, per=99.15%, avg=46808.84, stdev=17450.44, samples=19 00:16:09.682 iops : min= 1938, max=14276, avg=11702.21, stdev=4362.61, samples=19 00:16:09.682 lat (usec) : 100=96.74%, 250=1.44%, 500=1.45%, 750=0.01%, 1000=0.01% 00:16:09.682 lat (msec) : 2=0.07%, 4=0.23%, 10=0.06% 00:16:09.682 cpu : usr=2.21%, sys=10.90%, ctx=118030, majf=0, minf=797 00:16:09.682 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:09.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.682 issued rwts: total=0,118030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.682 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:09.682 00:16:09.682 Run status group 0 (all jobs): 00:16:09.682 WRITE: bw=46.1MiB/s (48.3MB/s), 46.1MiB/s-46.1MiB/s (48.3MB/s-48.3MB/s), io=461MiB (483MB), run=10001-10001msec 00:16:09.682 00:16:09.682 Disk stats (read/write): 00:16:09.682 ublkb0: ios=0/116604, merge=0/0, ticks=0/8243, in_queue=8244, util=99.08% 00:16:09.682 18:29:29 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.682 [2024-11-29 18:29:29.475438] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:09.682 [2024-11-29 18:29:29.511472] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:09.682 [2024-11-29 18:29:29.512171] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:09.682 [2024-11-29 18:29:29.519480] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:09.682 [2024-11-29 18:29:29.519721] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:09.682 [2024-11-29 18:29:29.519734] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.682 18:29:29 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:16:09.682 18:29:29 
ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.682 [2024-11-29 18:29:29.535549] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:09.682 request: 00:16:09.682 { 00:16:09.682 "ublk_id": 0, 00:16:09.682 "method": "ublk_stop_disk", 00:16:09.682 "req_id": 1 00:16:09.682 } 00:16:09.682 Got JSON-RPC error response 00:16:09.682 response: 00:16:09.682 { 00:16:09.682 "code": -19, 00:16:09.682 "message": "No such device" 00:16:09.682 } 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:09.682 18:29:29 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.682 [2024-11-29 18:29:29.551528] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:09.682 [2024-11-29 18:29:29.552753] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:09.682 [2024-11-29 18:29:29.552786] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.682 18:29:29 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.682 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.941 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.941 18:29:29 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:09.941 18:29:29 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:09.941 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.941 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.941 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.941 18:29:29 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:09.941 18:29:29 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:09.941 18:29:29 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 
']' 00:16:09.941 18:29:29 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:09.941 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.941 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.941 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.941 18:29:29 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:09.941 18:29:29 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:09.941 ************************************ 00:16:09.941 END TEST test_create_ublk 00:16:09.941 ************************************ 00:16:09.941 18:29:29 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:09.941 00:16:09.941 real 0m10.780s 00:16:09.941 user 0m0.510s 00:16:09.941 sys 0m1.181s 00:16:09.941 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.941 18:29:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.941 18:29:29 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:09.941 18:29:29 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:09.941 18:29:29 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.941 18:29:29 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.941 ************************************ 00:16:09.941 START TEST test_create_multi_ublk 00:16:09.941 ************************************ 00:16:09.941 18:29:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:09.941 18:29:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:09.941 18:29:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.941 18:29:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.941 [2024-11-29 18:29:29.754469] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:09.941 [2024-11-29 18:29:29.755320] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:09.941 18:29:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.941 18:29:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:09.941 18:29:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:09.941 18:29:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:09.941 18:29:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:09.941 18:29:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.941 18:29:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.941 18:29:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.941 18:29:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:09.941 18:29:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:09.941 18:29:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.941 18:29:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.941 [2024-11-29 18:29:29.826570] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:09.941 [2024-11-29 
18:29:29.826868] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:09.941 [2024-11-29 18:29:29.826881] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:09.941 [2024-11-29 18:29:29.826886] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:10.200 [2024-11-29 18:29:29.850476] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:10.200 [2024-11-29 18:29:29.850493] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:10.200 [2024-11-29 18:29:29.862482] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:10.200 [2024-11-29 18:29:29.862964] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:10.200 [2024-11-29 18:29:29.898483] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:10.200 18:29:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.200 18:29:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:10.200 18:29:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.200 18:29:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:10.200 18:29:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.200 18:29:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.200 18:29:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.200 18:29:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:10.200 18:29:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:10.200 18:29:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.200 18:29:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.200 [2024-11-29 18:29:29.982574] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:10.200 [2024-11-29 18:29:29.982863] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:10.200 [2024-11-29 18:29:29.982874] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:10.200 [2024-11-29 18:29:29.982880] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:10.200 [2024-11-29 18:29:29.994495] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:10.200 [2024-11-29 18:29:29.994515] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:10.200 [2024-11-29 18:29:30.006480] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:10.200 [2024-11-29 18:29:30.006956] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:10.200 [2024-11-29 18:29:30.031491] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:10.200 18:29:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.200 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:10.200 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.200 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:10.200 18:29:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.200 18:29:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.458 18:29:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.458 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:10.458 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:10.458 18:29:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.458 18:29:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.458 [2024-11-29 18:29:30.114568] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:10.458 [2024-11-29 18:29:30.114858] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:10.458 [2024-11-29 18:29:30.114870] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:10.458 [2024-11-29 18:29:30.114875] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:10.458 [2024-11-29 18:29:30.126495] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:10.458 [2024-11-29 18:29:30.126512] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:10.458 [2024-11-29 18:29:30.138486] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:10.458 [2024-11-29 18:29:30.138973] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:10.458 [2024-11-29 18:29:30.151503] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:10.458 18:29:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.459 [2024-11-29 18:29:30.234571] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:10.459 [2024-11-29 18:29:30.234865] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:10.459 [2024-11-29 18:29:30.234878] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:10.459 [2024-11-29 18:29:30.234884] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:10.459 [2024-11-29 18:29:30.246483] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:10.459 [2024-11-29 18:29:30.246503] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:10.459 [2024-11-29 18:29:30.258486] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:10.459 [2024-11-29 18:29:30.258964] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:10.459 [2024-11-29 18:29:30.271486] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:10.459 { 00:16:10.459 "ublk_device": "/dev/ublkb0", 00:16:10.459 "id": 0, 00:16:10.459 "queue_depth": 512, 00:16:10.459 "num_queues": 4, 00:16:10.459 "bdev_name": "Malloc0" 00:16:10.459 }, 00:16:10.459 { 00:16:10.459 "ublk_device": "/dev/ublkb1", 00:16:10.459 "id": 1, 00:16:10.459 "queue_depth": 512, 00:16:10.459 "num_queues": 4, 00:16:10.459 "bdev_name": "Malloc1" 00:16:10.459 }, 00:16:10.459 { 00:16:10.459 "ublk_device": "/dev/ublkb2", 00:16:10.459 "id": 2, 00:16:10.459 "queue_depth": 512, 00:16:10.459 "num_queues": 4, 00:16:10.459 "bdev_name": "Malloc2" 00:16:10.459 }, 00:16:10.459 { 00:16:10.459 "ublk_device": "/dev/ublkb3", 00:16:10.459 "id": 3, 00:16:10.459 "queue_depth": 512, 00:16:10.459 "num_queues": 4, 00:16:10.459 "bdev_name": "Malloc3" 00:16:10.459 } 00:16:10.459 ]' 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:10.459 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:10.717 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:10.717 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:10.717 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:10.717 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:10.717 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:10.717 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:10.717 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:10.717 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.717 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:10.717 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:10.717 18:29:30 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:10.717 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:10.717 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:10.717 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:10.717 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:10.717 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:10.717 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:10.717 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:10.717 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.976 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:10.976 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:10.976 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:10.976 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:10.976 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:10.976 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:10.976 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:10.976 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:10.976 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:10.976 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:10.976 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.976 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:10.976 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:10.976 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:10.976 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:10.976 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:10.976 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:10.976 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:11.234 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:11.234 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:11.234 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:11.234 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:11.234 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:11.234 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.234 18:29:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:11.234 18:29:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.234 18:29:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.234 [2024-11-29 18:29:30.930548] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:16:11.234 [2024-11-29 18:29:30.978522] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:11.234 [2024-11-29 18:29:30.979379] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:11.234 [2024-11-29 18:29:30.990478] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:11.234 [2024-11-29 18:29:30.990728] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:11.234 [2024-11-29 18:29:30.990739] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:11.234 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.234 18:29:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.234 18:29:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:11.234 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.234 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.234 [2024-11-29 18:29:31.014529] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:11.234 [2024-11-29 18:29:31.057851] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:11.234 [2024-11-29 18:29:31.063039] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:11.234 [2024-11-29 18:29:31.074491] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:11.234 [2024-11-29 18:29:31.074733] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:11.234 [2024-11-29 18:29:31.074744] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:11.234 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.234 18:29:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.234 18:29:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:11.234 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.234 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.234 [2024-11-29 18:29:31.098531] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:11.234 [2024-11-29 18:29:31.133905] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:11.234 [2024-11-29 18:29:31.135032] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:11.492 [2024-11-29 18:29:31.145485] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:11.492 [2024-11-29 18:29:31.145723] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:11.492 [2024-11-29 18:29:31.145733] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:11.492 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.492 18:29:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.492 18:29:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:11.492 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.492 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.492 [2024-11-29 
18:29:31.169524] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:11.492 [2024-11-29 18:29:31.211504] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:11.492 [2024-11-29 18:29:31.212180] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:11.492 [2024-11-29 18:29:31.221474] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:11.492 [2024-11-29 18:29:31.221732] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:11.492 [2024-11-29 18:29:31.221743] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:11.492 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.492 18:29:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:11.493 [2024-11-29 18:29:31.369541] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:11.493 [2024-11-29 18:29:31.370719] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:11.493 [2024-11-29 18:29:31.370748] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:11.493 18:29:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:11.493 18:29:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.493 18:29:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:11.493 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.493 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:11.751 18:29:31 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:12.009 18:29:31 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:12.009 18:29:31 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:12.009 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.009 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:12.009 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.009 18:29:31 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:12.009 18:29:31 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:12.009 ************************************ 00:16:12.009 END TEST test_create_multi_ublk 00:16:12.009 ************************************ 00:16:12.009 18:29:31 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:12.009 00:16:12.009 real 0m1.965s 00:16:12.009 user 0m0.767s 00:16:12.009 sys 0m0.131s 00:16:12.009 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.009 18:29:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:12.009 18:29:31 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:12.009 18:29:31 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:12.009 18:29:31 ublk -- ublk/ublk.sh@130 -- # killprocess 85068 00:16:12.009 18:29:31 ublk -- common/autotest_common.sh@954 -- # '[' -z 85068 ']' 00:16:12.009 18:29:31 ublk -- common/autotest_common.sh@958 -- # kill -0 85068 00:16:12.009 18:29:31 ublk -- common/autotest_common.sh@959 -- # uname 00:16:12.009 18:29:31 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.009 18:29:31 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85068 00:16:12.009 killing process with pid 85068 00:16:12.009 18:29:31 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:12.009 18:29:31 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:12.009 18:29:31 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85068' 00:16:12.009 18:29:31 ublk -- common/autotest_common.sh@973 -- # kill 85068 00:16:12.009 18:29:31 ublk -- common/autotest_common.sh@978 -- # wait 85068 00:16:12.268 [2024-11-29 18:29:31.924045] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:12.268 [2024-11-29 18:29:31.924104] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:12.529 00:16:12.529 real 0m18.352s 00:16:12.529 user 0m27.451s 00:16:12.529 sys 0m8.146s 00:16:12.529 18:29:32 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.529 18:29:32 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:12.529 ************************************ 00:16:12.529 END TEST ublk 00:16:12.529 ************************************ 00:16:12.529 18:29:32 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:12.529 18:29:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:12.529 
18:29:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.529 18:29:32 -- common/autotest_common.sh@10 -- # set +x 00:16:12.529 ************************************ 00:16:12.529 START TEST ublk_recovery 00:16:12.529 ************************************ 00:16:12.529 18:29:32 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:12.529 * Looking for test storage... 00:16:12.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:12.529 18:29:32 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:12.529 18:29:32 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:12.529 18:29:32 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:12.529 18:29:32 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:12.529 18:29:32 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:12.529 18:29:32 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.529 18:29:32 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:12.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.529 --rc genhtml_branch_coverage=1 00:16:12.529 --rc genhtml_function_coverage=1 00:16:12.529 --rc genhtml_legend=1 00:16:12.529 --rc geninfo_all_blocks=1 00:16:12.529 --rc geninfo_unexecuted_blocks=1 00:16:12.529 00:16:12.529 ' 00:16:12.529 18:29:32 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:12.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.529 --rc genhtml_branch_coverage=1 00:16:12.529 --rc genhtml_function_coverage=1 00:16:12.529 --rc genhtml_legend=1 00:16:12.529 --rc geninfo_all_blocks=1 00:16:12.529 --rc geninfo_unexecuted_blocks=1 00:16:12.529 00:16:12.529 ' 00:16:12.529 18:29:32 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:12.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.529 --rc genhtml_branch_coverage=1 00:16:12.529 --rc genhtml_function_coverage=1 00:16:12.529 --rc genhtml_legend=1 00:16:12.529 --rc geninfo_all_blocks=1 00:16:12.529 --rc geninfo_unexecuted_blocks=1 00:16:12.529 00:16:12.529 ' 00:16:12.529 18:29:32 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:12.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.529 --rc genhtml_branch_coverage=1 00:16:12.529 --rc genhtml_function_coverage=1 00:16:12.529 --rc genhtml_legend=1 00:16:12.529 --rc geninfo_all_blocks=1 00:16:12.529 --rc geninfo_unexecuted_blocks=1 00:16:12.529 00:16:12.529 ' 00:16:12.529 18:29:32 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:12.529 18:29:32 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:12.529 18:29:32 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:12.529 18:29:32 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:12.529 18:29:32 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:12.529 18:29:32 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:12.529 18:29:32 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:12.529 18:29:32 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:12.529 18:29:32 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:12.529 18:29:32 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:12.529 18:29:32 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=85431 00:16:12.529 18:29:32 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:12.529 18:29:32 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 85431 00:16:12.529 18:29:32 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:12.529 18:29:32 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85431 ']' 00:16:12.529 18:29:32 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.529 18:29:32 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.530 18:29:32 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.530 18:29:32 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.530 18:29:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:12.788 [2024-11-29 18:29:32.463149] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:16:12.788 [2024-11-29 18:29:32.463509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85431 ] 00:16:12.788 [2024-11-29 18:29:32.616889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:12.788 [2024-11-29 18:29:32.634797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.788 [2024-11-29 18:29:32.634842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.721 18:29:33 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.721 18:29:33 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:13.721 18:29:33 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:13.721 18:29:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.721 18:29:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.721 [2024-11-29 18:29:33.295470] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:13.721 [2024-11-29 18:29:33.296383] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:13.721 18:29:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.721 18:29:33 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:13.721 18:29:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.721 18:29:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.721 malloc0 00:16:13.721 18:29:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.721 18:29:33 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:13.721 18:29:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.721 18:29:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.721 [2024-11-29 18:29:33.327575] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:13.721 [2024-11-29 18:29:33.327665] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:13.721 [2024-11-29 18:29:33.327671] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:13.721 [2024-11-29 18:29:33.327678] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:13.721 [2024-11-29 18:29:33.336542] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:13.721 [2024-11-29 18:29:33.336565] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:13.721 [2024-11-29 18:29:33.343485] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:13.721 [2024-11-29 18:29:33.343597] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:13.721 [2024-11-29 18:29:33.358470] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:13.721 1 00:16:13.721 18:29:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.721 18:29:33 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:14.655 18:29:34 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=85459 00:16:14.655 18:29:34 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:14.655 18:29:34 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:14.655 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:14.655 fio-3.35 00:16:14.655 Starting 1 process 00:16:19.994 18:29:39 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 85431 00:16:19.994 18:29:39 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:25.365 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 85431 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:25.365 18:29:44 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=85578 00:16:25.365 18:29:44 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:25.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.365 18:29:44 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 85578 00:16:25.365 18:29:44 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:25.365 18:29:44 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85578 ']' 00:16:25.365 18:29:44 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.365 18:29:44 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.365 18:29:44 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.365 18:29:44 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.365 18:29:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:25.365 [2024-11-29 18:29:44.437641] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
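Annotation: the kill at ublk_recovery.sh line 36 and the restart logged just above are the heart of this test. spdk_tgt pid 85431 is sent SIGKILL while fio still has I/O in flight on /dev/ublkb1, and a second target (pid 85578) is started to reattach to the same kernel ublk device. A minimal sketch of that flow, reduced to the RPCs visible in this trace (sizes, queue counts, and device IDs copied from it; fio options and error handling elided):

  # Sketch only; assumes rpc.py talks to the default /var/tmp/spdk.sock.
  modprobe ublk_drv
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk & pid=$!
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096
  rpc.py ublk_start_disk malloc0 1 -q 2 -d 128    # exposes /dev/ublkb1
  fio --filename=/dev/ublkb1 --time_based --runtime=60 ... &
  kill -9 "$pid"                                  # simulated crash mid-I/O
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk & pid=$!
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096
  rpc.py ublk_recover_disk malloc0 1              # reclaims kernel ublk dev 1

The UBLK_CMD_GET_DEV_INFO / UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY exchange that follows in the trace is the driver-level effect of that last RPC.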
00:16:25.365 [2024-11-29 18:29:44.437887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85578 ] 00:16:25.365 [2024-11-29 18:29:44.584676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:25.365 [2024-11-29 18:29:44.604117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.365 [2024-11-29 18:29:44.604199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.623 18:29:45 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.623 18:29:45 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:25.623 18:29:45 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:25.623 18:29:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.623 18:29:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:25.623 [2024-11-29 18:29:45.293475] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:25.623 [2024-11-29 18:29:45.294420] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:25.623 18:29:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.623 18:29:45 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:25.623 18:29:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.623 18:29:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:25.623 malloc0 00:16:25.623 18:29:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.623 18:29:45 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:25.623 18:29:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.623 18:29:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:25.623 [2024-11-29 18:29:45.325603] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:25.623 [2024-11-29 18:29:45.325636] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:25.623 [2024-11-29 18:29:45.325642] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:25.623 [2024-11-29 18:29:45.333515] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:25.623 [2024-11-29 18:29:45.333532] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:25.623 1 00:16:25.623 18:29:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.623 18:29:45 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 85459 00:16:26.557 [2024-11-29 18:29:46.333554] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:26.557 [2024-11-29 18:29:46.341476] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:26.557 [2024-11-29 18:29:46.341496] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:27.492 [2024-11-29 18:29:47.341515] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:27.492 [2024-11-29 18:29:47.349475] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:27.492 [2024-11-29 18:29:47.349488] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:16:28.868 [2024-11-29 18:29:48.349514] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:28.868 [2024-11-29 18:29:48.363470] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:28.868 [2024-11-29 18:29:48.363485] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:28.868 [2024-11-29 18:29:48.363491] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:28.868 [2024-11-29 18:29:48.363559] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:50.792 [2024-11-29 18:30:09.735472] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:50.792 [2024-11-29 18:30:09.743081] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:50.792 [2024-11-29 18:30:09.750692] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:50.792 [2024-11-29 18:30:09.750764] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:17.329 00:17:17.329 fio_test: (groupid=0, jobs=1): err= 0: pid=85466: Fri Nov 29 18:30:34 2024 00:17:17.329 read: IOPS=15.1k, BW=58.8MiB/s (61.7MB/s)(3530MiB/60002msec) 00:17:17.329 slat (nsec): min=964, max=228548, avg=4930.31, stdev=1402.25 00:17:17.329 clat (usec): min=827, max=30392k, avg=4343.03, stdev=263599.14 00:17:17.329 lat (usec): min=831, max=30392k, avg=4347.96, stdev=263599.14 00:17:17.329 clat percentiles (usec): 00:17:17.329 | 1.00th=[ 1729], 5.00th=[ 1844], 10.00th=[ 1860], 20.00th=[ 1893], 00:17:17.329 | 30.00th=[ 1909], 40.00th=[ 1926], 50.00th=[ 1926], 60.00th=[ 1942], 00:17:17.329 | 70.00th=[ 1958], 80.00th=[ 1975], 90.00th=[ 2024], 95.00th=[ 2999], 00:17:17.329 | 99.00th=[ 5145], 99.50th=[ 5604], 99.90th=[ 7373], 99.95th=[12387], 00:17:17.329 | 99.99th=[13566] 00:17:17.329 bw ( KiB/s): min=41872, max=127368, per=100.00%, avg=120623.81, stdev=15217.01, samples=59 00:17:17.329 iops : min=10468, max=31842, avg=30155.95, stdev=3804.25, samples=59 00:17:17.329 write: IOPS=15.0k, BW=58.8MiB/s (61.6MB/s)(3525MiB/60002msec); 0 zone resets 00:17:17.329 slat (nsec): min=981, max=128065, avg=4959.95, stdev=1340.33 00:17:17.329 clat (usec): min=689, max=30392k, avg=4150.75, stdev=247776.77 00:17:17.329 lat (usec): min=694, max=30392k, avg=4155.71, stdev=247776.77 00:17:17.329 clat percentiles (usec): 00:17:17.329 | 1.00th=[ 1762], 5.00th=[ 1926], 10.00th=[ 1958], 20.00th=[ 1975], 00:17:17.329 | 30.00th=[ 1991], 40.00th=[ 2008], 50.00th=[ 2024], 60.00th=[ 2040], 00:17:17.329 | 70.00th=[ 2057], 80.00th=[ 2073], 90.00th=[ 2114], 95.00th=[ 2835], 00:17:17.329 | 99.00th=[ 5145], 99.50th=[ 5669], 99.90th=[ 7308], 99.95th=[ 8717], 00:17:17.329 | 99.99th=[13566] 00:17:17.329 bw ( KiB/s): min=41200, max=126264, per=100.00%, avg=120453.80, stdev=15399.71, samples=59 00:17:17.329 iops : min=10300, max=31566, avg=30113.44, stdev=3849.93, samples=59 00:17:17.329 lat (usec) : 750=0.01%, 1000=0.01% 00:17:17.329 lat (msec) : 2=61.06%, 4=36.37%, 10=2.52%, 20=0.04%, >=2000=0.01% 00:17:17.329 cpu : usr=3.47%, sys=15.14%, ctx=60047, majf=0, minf=13 00:17:17.329 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:17.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:17.329 
issued rwts: total=903642,902434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.329 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:17.329 00:17:17.329 Run status group 0 (all jobs): 00:17:17.329 READ: bw=58.8MiB/s (61.7MB/s), 58.8MiB/s-58.8MiB/s (61.7MB/s-61.7MB/s), io=3530MiB (3701MB), run=60002-60002msec 00:17:17.329 WRITE: bw=58.8MiB/s (61.6MB/s), 58.8MiB/s-58.8MiB/s (61.6MB/s-61.6MB/s), io=3525MiB (3696MB), run=60002-60002msec 00:17:17.329 00:17:17.329 Disk stats (read/write): 00:17:17.329 ublkb1: ios=900569/899218, merge=0/0, ticks=3872103/3619430, in_queue=7491534, util=99.92% 00:17:17.329 18:30:34 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:17.329 18:30:34 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.329 18:30:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.329 [2024-11-29 18:30:34.624460] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:17.329 [2024-11-29 18:30:34.666580] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:17.329 [2024-11-29 18:30:34.666728] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:17.329 [2024-11-29 18:30:34.672487] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:17.329 [2024-11-29 18:30:34.672579] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:17.329 [2024-11-29 18:30:34.672595] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:17.329 18:30:34 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.329 18:30:34 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:17.329 18:30:34 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.329 18:30:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.329 [2024-11-29 18:30:34.688559] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:17.329 [2024-11-29 18:30:34.690251] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:17.329 [2024-11-29 18:30:34.690287] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:17.329 18:30:34 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.329 18:30:34 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:17.329 18:30:34 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:17.329 18:30:34 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 85578 00:17:17.329 18:30:34 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 85578 ']' 00:17:17.329 18:30:34 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 85578 00:17:17.329 18:30:34 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:17.329 18:30:34 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.329 18:30:34 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85578 00:17:17.329 killing process with pid 85578 00:17:17.329 18:30:34 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:17.329 18:30:34 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:17.329 18:30:34 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85578' 00:17:17.329 18:30:34 ublk_recovery -- common/autotest_common.sh@973 -- # kill 85578 00:17:17.329 18:30:34 ublk_recovery -- common/autotest_common.sh@978 -- # wait 85578 
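Annotation: the fio summary a few records back is internally consistent. 903642 reads of 4 KiB over the 60 s run works out to the reported 58.8 MiB/s, and the >=2000 ms latency bucket (0.01% of I/Os) plausibly corresponds to requests the kernel queue held while the target was down between the kill and UBLK_CMD_END_USER_RECOVERY. A one-liner to reproduce the bandwidth arithmetic:

  # 903642 reads x 4096 B over a 60 s run, in MiB/s (integer math, so it rounds down)
  echo $(( 903642 * 4096 / 60 / 1024 / 1024 ))   # -> 58, matching the READ: bw=58.8MiB/s line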
00:17:17.329 [2024-11-29 18:30:34.889948] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:17.329 [2024-11-29 18:30:34.889990] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:17.329 ************************************ 00:17:17.329 END TEST ublk_recovery 00:17:17.329 ************************************ 00:17:17.329 00:17:17.329 real 1m2.918s 00:17:17.329 user 1m44.793s 00:17:17.329 sys 0m21.794s 00:17:17.329 18:30:35 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.329 18:30:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.329 18:30:35 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:17.329 18:30:35 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:17.329 18:30:35 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:17.329 18:30:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:17.329 18:30:35 -- common/autotest_common.sh@10 -- # set +x 00:17:17.329 18:30:35 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:17.329 18:30:35 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:17.329 18:30:35 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:17.329 18:30:35 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:17.329 18:30:35 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:17.329 18:30:35 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:17.329 18:30:35 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:17.330 18:30:35 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:17.330 18:30:35 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:17.330 18:30:35 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:17.330 18:30:35 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:17.330 18:30:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:17.330 18:30:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.330 18:30:35 -- common/autotest_common.sh@10 -- # set +x 00:17:17.330 ************************************ 00:17:17.330 START TEST ftl 00:17:17.330 ************************************ 00:17:17.330 18:30:35 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:17.330 * Looking for test storage... 
00:17:17.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:17.330 18:30:35 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:17.330 18:30:35 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:17:17.330 18:30:35 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:17.330 18:30:35 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:17.330 18:30:35 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.330 18:30:35 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.330 18:30:35 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.330 18:30:35 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.330 18:30:35 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.330 18:30:35 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.330 18:30:35 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:17.330 18:30:35 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.330 18:30:35 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.330 18:30:35 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.330 18:30:35 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.330 18:30:35 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:17.330 18:30:35 ftl -- scripts/common.sh@345 -- # : 1 00:17:17.330 18:30:35 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.330 18:30:35 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:17.330 18:30:35 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:17.330 18:30:35 ftl -- scripts/common.sh@353 -- # local d=1 00:17:17.330 18:30:35 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.330 18:30:35 ftl -- scripts/common.sh@355 -- # echo 1 00:17:17.330 18:30:35 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.330 18:30:35 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:17.330 18:30:35 ftl -- scripts/common.sh@353 -- # local d=2 00:17:17.330 18:30:35 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.330 18:30:35 ftl -- scripts/common.sh@355 -- # echo 2 00:17:17.330 18:30:35 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:17.330 18:30:35 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:17.330 18:30:35 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:17.330 18:30:35 ftl -- scripts/common.sh@368 -- # return 0 00:17:17.330 18:30:35 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.330 18:30:35 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:17.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.330 --rc genhtml_branch_coverage=1 00:17:17.330 --rc genhtml_function_coverage=1 00:17:17.330 --rc genhtml_legend=1 00:17:17.330 --rc geninfo_all_blocks=1 00:17:17.330 --rc geninfo_unexecuted_blocks=1 00:17:17.330 00:17:17.330 ' 00:17:17.330 18:30:35 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:17.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.330 --rc genhtml_branch_coverage=1 00:17:17.330 --rc genhtml_function_coverage=1 00:17:17.330 --rc genhtml_legend=1 00:17:17.330 --rc geninfo_all_blocks=1 00:17:17.330 --rc geninfo_unexecuted_blocks=1 00:17:17.330 00:17:17.330 ' 00:17:17.330 18:30:35 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:17.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.330 --rc genhtml_branch_coverage=1 00:17:17.330 --rc genhtml_function_coverage=1 00:17:17.330 --rc 
genhtml_legend=1 00:17:17.330 --rc geninfo_all_blocks=1 00:17:17.330 --rc geninfo_unexecuted_blocks=1 00:17:17.330 00:17:17.330 ' 00:17:17.330 18:30:35 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:17.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.330 --rc genhtml_branch_coverage=1 00:17:17.330 --rc genhtml_function_coverage=1 00:17:17.330 --rc genhtml_legend=1 00:17:17.330 --rc geninfo_all_blocks=1 00:17:17.330 --rc geninfo_unexecuted_blocks=1 00:17:17.330 00:17:17.330 ' 00:17:17.330 18:30:35 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:17.330 18:30:35 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:17.330 18:30:35 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:17.330 18:30:35 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:17.330 18:30:35 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:17.330 18:30:35 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:17.330 18:30:35 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:17.330 18:30:35 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:17.330 18:30:35 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:17.330 18:30:35 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:17.330 18:30:35 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:17.330 18:30:35 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:17.330 18:30:35 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:17.330 18:30:35 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:17.330 18:30:35 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:17.330 18:30:35 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:17.330 18:30:35 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:17.330 18:30:35 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:17.330 18:30:35 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:17.330 18:30:35 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:17.330 18:30:35 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:17.330 18:30:35 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:17.330 18:30:35 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:17.330 18:30:35 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:17.330 18:30:35 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:17.330 18:30:35 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:17.330 18:30:35 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:17.330 18:30:35 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:17.330 18:30:35 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:17.330 18:30:35 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:17.330 18:30:35 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:17.330 18:30:35 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:17:17.330 18:30:35 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:17.330 18:30:35 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:17.330 18:30:35 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:17.330 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:17.330 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:17.330 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:17.330 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:17.330 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:17.330 18:30:35 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=86374 00:17:17.330 18:30:35 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:17.330 18:30:35 ftl -- ftl/ftl.sh@38 -- # waitforlisten 86374 00:17:17.330 18:30:35 ftl -- common/autotest_common.sh@835 -- # '[' -z 86374 ']' 00:17:17.330 18:30:35 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.330 18:30:35 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.330 18:30:35 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.330 18:30:35 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.330 18:30:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:17.330 [2024-11-29 18:30:35.978845] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:17:17.330 [2024-11-29 18:30:35.978999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86374 ] 00:17:17.330 [2024-11-29 18:30:36.138793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.330 [2024-11-29 18:30:36.160775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.330 18:30:36 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.330 18:30:36 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:17.330 18:30:36 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:17.330 18:30:36 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:17.591 18:30:37 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:17.591 18:30:37 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:17.852 18:30:37 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:17.852 18:30:37 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:17.852 18:30:37 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:18.113 18:30:37 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:18.113 18:30:37 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:18.113 18:30:37 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:18.113 18:30:37 ftl -- ftl/ftl.sh@50 -- # break 00:17:18.113 18:30:37 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:18.113 18:30:37 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:17:18.113 18:30:37 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:18.113 18:30:37 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:18.374 18:30:38 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:18.374 18:30:38 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:18.374 18:30:38 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:18.374 18:30:38 ftl -- ftl/ftl.sh@63 -- # break 00:17:18.374 18:30:38 ftl -- ftl/ftl.sh@66 -- # killprocess 86374 00:17:18.374 18:30:38 ftl -- common/autotest_common.sh@954 -- # '[' -z 86374 ']' 00:17:18.374 18:30:38 ftl -- common/autotest_common.sh@958 -- # kill -0 86374 00:17:18.374 18:30:38 ftl -- common/autotest_common.sh@959 -- # uname 00:17:18.374 18:30:38 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:18.374 18:30:38 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86374 00:17:18.374 killing process with pid 86374 00:17:18.374 18:30:38 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:18.374 18:30:38 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:18.374 18:30:38 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86374' 00:17:18.374 18:30:38 ftl -- common/autotest_common.sh@973 -- # kill 86374 00:17:18.374 18:30:38 ftl -- common/autotest_common.sh@978 -- # wait 86374 00:17:18.637 18:30:38 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:18.637 18:30:38 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:18.637 18:30:38 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:18.637 18:30:38 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.637 18:30:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:18.637 ************************************ 00:17:18.637 START TEST ftl_fio_basic 00:17:18.637 ************************************ 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:18.637 * Looking for test storage... 
00:17:18.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:18.637 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:18.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.638 --rc genhtml_branch_coverage=1 00:17:18.638 --rc genhtml_function_coverage=1 00:17:18.638 --rc genhtml_legend=1 00:17:18.638 --rc geninfo_all_blocks=1 00:17:18.638 --rc geninfo_unexecuted_blocks=1 00:17:18.638 00:17:18.638 ' 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:18.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.638 --rc 
genhtml_branch_coverage=1 00:17:18.638 --rc genhtml_function_coverage=1 00:17:18.638 --rc genhtml_legend=1 00:17:18.638 --rc geninfo_all_blocks=1 00:17:18.638 --rc geninfo_unexecuted_blocks=1 00:17:18.638 00:17:18.638 ' 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:18.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.638 --rc genhtml_branch_coverage=1 00:17:18.638 --rc genhtml_function_coverage=1 00:17:18.638 --rc genhtml_legend=1 00:17:18.638 --rc geninfo_all_blocks=1 00:17:18.638 --rc geninfo_unexecuted_blocks=1 00:17:18.638 00:17:18.638 ' 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:18.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.638 --rc genhtml_branch_coverage=1 00:17:18.638 --rc genhtml_function_coverage=1 00:17:18.638 --rc genhtml_legend=1 00:17:18.638 --rc geninfo_all_blocks=1 00:17:18.638 --rc geninfo_unexecuted_blocks=1 00:17:18.638 00:17:18.638 ' 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:18.638 
18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=86491 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 86491 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 86491 ']' 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
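Annotation: earlier in this trace (ftl.sh@47 and ftl.sh@60) a throwaway target (pid 86374) existed only to pick devices: bdev_get_bdevs output is filtered with jq to find an NVMe namespace with 64-byte metadata for the NV cache (0000:00:10.0) and a different namespace for the base device (0000:00:11.0), after which that target is killed and fio.sh starts its own. A condensed sketch of that selection; the jq filters are copied verbatim from the trace, only the shell plumbing around them is paraphrased:

  # Pick the NV cache disk (has 64-byte metadata), then any other large non-zoned namespace.
  cache=$(rpc.py bdev_get_bdevs | jq -r \
    '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')
  base=$(rpc.py bdev_get_bdevs | jq -r \
    ".[] | select(.driver_specific.nvme[0].pci_address!=\"$cache\" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address")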
00:17:18.638 18:30:38 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.638 18:30:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:18.900 [2024-11-29 18:30:38.631600] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:17:18.900 [2024-11-29 18:30:38.632002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86491 ] 00:17:18.900 [2024-11-29 18:30:38.792517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:19.161 [2024-11-29 18:30:38.818846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.161 [2024-11-29 18:30:38.819095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.161 [2024-11-29 18:30:38.819168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.735 18:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.735 18:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:19.735 18:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:19.735 18:30:39 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:19.735 18:30:39 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:19.735 18:30:39 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:19.735 18:30:39 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:19.735 18:30:39 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:19.996 18:30:39 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:19.996 18:30:39 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:19.996 18:30:39 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:19.996 18:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:19.997 18:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:19.997 18:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:19.997 18:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:19.997 18:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:19.997 18:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:19.997 { 00:17:19.997 "name": "nvme0n1", 00:17:19.997 "aliases": [ 00:17:19.997 "f672462c-ca67-4def-8432-6d46b8473df9" 00:17:19.997 ], 00:17:19.997 "product_name": "NVMe disk", 00:17:19.997 "block_size": 4096, 00:17:19.997 "num_blocks": 1310720, 00:17:19.997 "uuid": "f672462c-ca67-4def-8432-6d46b8473df9", 00:17:19.997 "numa_id": -1, 00:17:19.997 "assigned_rate_limits": { 00:17:19.997 "rw_ios_per_sec": 0, 00:17:19.997 "rw_mbytes_per_sec": 0, 00:17:19.997 "r_mbytes_per_sec": 0, 00:17:19.997 "w_mbytes_per_sec": 0 00:17:19.997 }, 00:17:19.997 "claimed": false, 00:17:19.997 "zoned": false, 00:17:19.997 "supported_io_types": { 00:17:19.997 "read": true, 00:17:19.997 "write": true, 00:17:19.997 "unmap": true, 00:17:19.997 "flush": true, 00:17:19.997 "reset": true, 00:17:19.997 "nvme_admin": true, 00:17:19.997 "nvme_io": true, 00:17:19.997 "nvme_io_md": 
false, 00:17:19.997 "write_zeroes": true, 00:17:19.997 "zcopy": false, 00:17:19.997 "get_zone_info": false, 00:17:19.997 "zone_management": false, 00:17:19.997 "zone_append": false, 00:17:19.997 "compare": true, 00:17:19.997 "compare_and_write": false, 00:17:19.997 "abort": true, 00:17:19.997 "seek_hole": false, 00:17:19.997 "seek_data": false, 00:17:19.997 "copy": true, 00:17:19.997 "nvme_iov_md": false 00:17:19.997 }, 00:17:19.997 "driver_specific": { 00:17:19.997 "nvme": [ 00:17:19.997 { 00:17:19.997 "pci_address": "0000:00:11.0", 00:17:19.997 "trid": { 00:17:19.997 "trtype": "PCIe", 00:17:19.997 "traddr": "0000:00:11.0" 00:17:19.997 }, 00:17:19.997 "ctrlr_data": { 00:17:19.997 "cntlid": 0, 00:17:19.997 "vendor_id": "0x1b36", 00:17:19.997 "model_number": "QEMU NVMe Ctrl", 00:17:19.997 "serial_number": "12341", 00:17:19.997 "firmware_revision": "8.0.0", 00:17:19.997 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:19.997 "oacs": { 00:17:19.997 "security": 0, 00:17:19.997 "format": 1, 00:17:19.997 "firmware": 0, 00:17:19.997 "ns_manage": 1 00:17:19.997 }, 00:17:19.997 "multi_ctrlr": false, 00:17:19.997 "ana_reporting": false 00:17:19.997 }, 00:17:19.997 "vs": { 00:17:19.997 "nvme_version": "1.4" 00:17:19.997 }, 00:17:19.997 "ns_data": { 00:17:19.997 "id": 1, 00:17:19.997 "can_share": false 00:17:19.997 } 00:17:19.997 } 00:17:19.997 ], 00:17:19.997 "mp_policy": "active_passive" 00:17:19.997 } 00:17:19.997 } 00:17:19.997 ]' 00:17:19.997 18:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:20.257 18:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:20.257 18:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:20.257 18:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:20.257 18:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:20.257 18:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:20.257 18:30:39 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:20.257 18:30:39 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:20.257 18:30:39 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:20.257 18:30:39 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:20.257 18:30:39 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:20.257 18:30:40 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:20.257 18:30:40 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:20.518 18:30:40 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=fb29b290-68e1-4587-8076-d024f3c165a1 00:17:20.518 18:30:40 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fb29b290-68e1-4587-8076-d024f3c165a1 00:17:20.778 18:30:40 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=6b564208-5ee3-4ab5-90d7-f22fe8f11dbf 00:17:20.778 18:30:40 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6b564208-5ee3-4ab5-90d7-f22fe8f11dbf 00:17:20.778 18:30:40 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:20.778 18:30:40 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:20.778 18:30:40 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=6b564208-5ee3-4ab5-90d7-f22fe8f11dbf 00:17:20.778 18:30:40 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:20.778 18:30:40 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 6b564208-5ee3-4ab5-90d7-f22fe8f11dbf 00:17:20.778 18:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6b564208-5ee3-4ab5-90d7-f22fe8f11dbf 00:17:20.778 18:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:20.778 18:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:20.778 18:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:20.778 18:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6b564208-5ee3-4ab5-90d7-f22fe8f11dbf 00:17:21.037 18:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:21.037 { 00:17:21.037 "name": "6b564208-5ee3-4ab5-90d7-f22fe8f11dbf", 00:17:21.037 "aliases": [ 00:17:21.037 "lvs/nvme0n1p0" 00:17:21.037 ], 00:17:21.037 "product_name": "Logical Volume", 00:17:21.037 "block_size": 4096, 00:17:21.037 "num_blocks": 26476544, 00:17:21.037 "uuid": "6b564208-5ee3-4ab5-90d7-f22fe8f11dbf", 00:17:21.037 "assigned_rate_limits": { 00:17:21.037 "rw_ios_per_sec": 0, 00:17:21.037 "rw_mbytes_per_sec": 0, 00:17:21.037 "r_mbytes_per_sec": 0, 00:17:21.037 "w_mbytes_per_sec": 0 00:17:21.037 }, 00:17:21.037 "claimed": false, 00:17:21.037 "zoned": false, 00:17:21.037 "supported_io_types": { 00:17:21.037 "read": true, 00:17:21.037 "write": true, 00:17:21.037 "unmap": true, 00:17:21.037 "flush": false, 00:17:21.037 "reset": true, 00:17:21.037 "nvme_admin": false, 00:17:21.037 "nvme_io": false, 00:17:21.037 "nvme_io_md": false, 00:17:21.037 "write_zeroes": true, 00:17:21.037 "zcopy": false, 00:17:21.037 "get_zone_info": false, 00:17:21.037 "zone_management": false, 00:17:21.037 "zone_append": false, 00:17:21.037 "compare": false, 00:17:21.037 "compare_and_write": false, 00:17:21.037 "abort": false, 00:17:21.037 "seek_hole": true, 00:17:21.037 "seek_data": true, 00:17:21.037 "copy": false, 00:17:21.037 "nvme_iov_md": false 00:17:21.037 }, 00:17:21.037 "driver_specific": { 00:17:21.037 "lvol": { 00:17:21.037 "lvol_store_uuid": "fb29b290-68e1-4587-8076-d024f3c165a1", 00:17:21.037 "base_bdev": "nvme0n1", 00:17:21.037 "thin_provision": true, 00:17:21.037 "num_allocated_clusters": 0, 00:17:21.037 "snapshot": false, 00:17:21.037 "clone": false, 00:17:21.037 "esnap_clone": false 00:17:21.037 } 00:17:21.037 } 00:17:21.037 } 00:17:21.037 ]' 00:17:21.037 18:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:21.037 18:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:21.037 18:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:21.037 18:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:21.037 18:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:21.037 18:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:21.037 18:30:40 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:21.037 18:30:40 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:21.037 18:30:40 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:21.296 18:30:41 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:21.296 18:30:41 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:17:21.296 18:30:41 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 6b564208-5ee3-4ab5-90d7-f22fe8f11dbf 00:17:21.296 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6b564208-5ee3-4ab5-90d7-f22fe8f11dbf 00:17:21.296 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:21.296 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:21.296 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:21.296 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6b564208-5ee3-4ab5-90d7-f22fe8f11dbf 00:17:21.555 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:21.555 { 00:17:21.555 "name": "6b564208-5ee3-4ab5-90d7-f22fe8f11dbf", 00:17:21.555 "aliases": [ 00:17:21.555 "lvs/nvme0n1p0" 00:17:21.555 ], 00:17:21.555 "product_name": "Logical Volume", 00:17:21.555 "block_size": 4096, 00:17:21.555 "num_blocks": 26476544, 00:17:21.555 "uuid": "6b564208-5ee3-4ab5-90d7-f22fe8f11dbf", 00:17:21.555 "assigned_rate_limits": { 00:17:21.555 "rw_ios_per_sec": 0, 00:17:21.555 "rw_mbytes_per_sec": 0, 00:17:21.555 "r_mbytes_per_sec": 0, 00:17:21.555 "w_mbytes_per_sec": 0 00:17:21.555 }, 00:17:21.555 "claimed": false, 00:17:21.555 "zoned": false, 00:17:21.555 "supported_io_types": { 00:17:21.555 "read": true, 00:17:21.555 "write": true, 00:17:21.555 "unmap": true, 00:17:21.555 "flush": false, 00:17:21.555 "reset": true, 00:17:21.555 "nvme_admin": false, 00:17:21.555 "nvme_io": false, 00:17:21.555 "nvme_io_md": false, 00:17:21.555 "write_zeroes": true, 00:17:21.555 "zcopy": false, 00:17:21.555 "get_zone_info": false, 00:17:21.555 "zone_management": false, 00:17:21.555 "zone_append": false, 00:17:21.555 "compare": false, 00:17:21.555 "compare_and_write": false, 00:17:21.555 "abort": false, 00:17:21.555 "seek_hole": true, 00:17:21.555 "seek_data": true, 00:17:21.555 "copy": false, 00:17:21.555 "nvme_iov_md": false 00:17:21.555 }, 00:17:21.555 "driver_specific": { 00:17:21.555 "lvol": { 00:17:21.555 "lvol_store_uuid": "fb29b290-68e1-4587-8076-d024f3c165a1", 00:17:21.555 "base_bdev": "nvme0n1", 00:17:21.555 "thin_provision": true, 00:17:21.555 "num_allocated_clusters": 0, 00:17:21.555 "snapshot": false, 00:17:21.555 "clone": false, 00:17:21.555 "esnap_clone": false 00:17:21.555 } 00:17:21.555 } 00:17:21.555 } 00:17:21.555 ]' 00:17:21.555 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:21.555 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:21.555 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:21.555 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:21.555 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:21.555 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:21.555 18:30:41 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:21.555 18:30:41 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:21.814 18:30:41 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:21.814 18:30:41 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:21.814 18:30:41 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:21.814 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:21.814 18:30:41 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 6b564208-5ee3-4ab5-90d7-f22fe8f11dbf 00:17:21.814 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6b564208-5ee3-4ab5-90d7-f22fe8f11dbf 00:17:21.814 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:21.814 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:21.814 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:21.814 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6b564208-5ee3-4ab5-90d7-f22fe8f11dbf 00:17:22.072 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:22.072 { 00:17:22.072 "name": "6b564208-5ee3-4ab5-90d7-f22fe8f11dbf", 00:17:22.072 "aliases": [ 00:17:22.072 "lvs/nvme0n1p0" 00:17:22.072 ], 00:17:22.072 "product_name": "Logical Volume", 00:17:22.072 "block_size": 4096, 00:17:22.072 "num_blocks": 26476544, 00:17:22.072 "uuid": "6b564208-5ee3-4ab5-90d7-f22fe8f11dbf", 00:17:22.072 "assigned_rate_limits": { 00:17:22.072 "rw_ios_per_sec": 0, 00:17:22.072 "rw_mbytes_per_sec": 0, 00:17:22.072 "r_mbytes_per_sec": 0, 00:17:22.072 "w_mbytes_per_sec": 0 00:17:22.072 }, 00:17:22.072 "claimed": false, 00:17:22.072 "zoned": false, 00:17:22.072 "supported_io_types": { 00:17:22.072 "read": true, 00:17:22.072 "write": true, 00:17:22.072 "unmap": true, 00:17:22.072 "flush": false, 00:17:22.072 "reset": true, 00:17:22.072 "nvme_admin": false, 00:17:22.072 "nvme_io": false, 00:17:22.072 "nvme_io_md": false, 00:17:22.072 "write_zeroes": true, 00:17:22.072 "zcopy": false, 00:17:22.072 "get_zone_info": false, 00:17:22.072 "zone_management": false, 00:17:22.072 "zone_append": false, 00:17:22.073 "compare": false, 00:17:22.073 "compare_and_write": false, 00:17:22.073 "abort": false, 00:17:22.073 "seek_hole": true, 00:17:22.073 "seek_data": true, 00:17:22.073 "copy": false, 00:17:22.073 "nvme_iov_md": false 00:17:22.073 }, 00:17:22.073 "driver_specific": { 00:17:22.073 "lvol": { 00:17:22.073 "lvol_store_uuid": "fb29b290-68e1-4587-8076-d024f3c165a1", 00:17:22.073 "base_bdev": "nvme0n1", 00:17:22.073 "thin_provision": true, 00:17:22.073 "num_allocated_clusters": 0, 00:17:22.073 "snapshot": false, 00:17:22.073 "clone": false, 00:17:22.073 "esnap_clone": false 00:17:22.073 } 00:17:22.073 } 00:17:22.073 } 00:17:22.073 ]' 00:17:22.073 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:22.073 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:22.073 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:22.073 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:22.073 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:22.073 18:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:22.073 18:30:41 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:22.073 18:30:41 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:22.073 18:30:41 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6b564208-5ee3-4ab5-90d7-f22fe8f11dbf -c nvc0n1p0 --l2p_dram_limit 60 00:17:22.332 [2024-11-29 18:30:41.979987] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.332 [2024-11-29 18:30:41.980026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:22.332 [2024-11-29 18:30:41.980038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:22.332 [2024-11-29 18:30:41.980045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.332 [2024-11-29 18:30:41.980114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.332 [2024-11-29 18:30:41.980123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:22.332 [2024-11-29 18:30:41.980129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:22.332 [2024-11-29 18:30:41.980140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.332 [2024-11-29 18:30:41.980175] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:22.332 [2024-11-29 18:30:41.980384] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:22.332 [2024-11-29 18:30:41.980396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.332 [2024-11-29 18:30:41.980403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:22.332 [2024-11-29 18:30:41.980409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:17:22.332 [2024-11-29 18:30:41.980416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.332 [2024-11-29 18:30:41.980445] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9b2cd339-7abf-4e83-9f94-676cad6f00bd 00:17:22.332 [2024-11-29 18:30:41.981488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.332 [2024-11-29 18:30:41.981523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:22.332 [2024-11-29 18:30:41.981534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:22.332 [2024-11-29 18:30:41.981541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.332 [2024-11-29 18:30:41.986713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.332 [2024-11-29 18:30:41.986822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:22.333 [2024-11-29 18:30:41.986839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.089 ms 00:17:22.333 [2024-11-29 18:30:41.986846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.333 [2024-11-29 18:30:41.986937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.333 [2024-11-29 18:30:41.986945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:22.333 [2024-11-29 18:30:41.986953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:22.333 [2024-11-29 18:30:41.986958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.333 [2024-11-29 18:30:41.987018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.333 [2024-11-29 18:30:41.987033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:22.333 [2024-11-29 18:30:41.987041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:22.333 [2024-11-29 18:30:41.987048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:22.333 [2024-11-29 18:30:41.987077] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:22.333 [2024-11-29 18:30:41.988378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.333 [2024-11-29 18:30:41.988406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:22.333 [2024-11-29 18:30:41.988413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:17:22.333 [2024-11-29 18:30:41.988420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.333 [2024-11-29 18:30:41.988465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.333 [2024-11-29 18:30:41.988474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:22.333 [2024-11-29 18:30:41.988481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:22.333 [2024-11-29 18:30:41.988492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.333 [2024-11-29 18:30:41.988514] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:22.333 [2024-11-29 18:30:41.988627] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:22.333 [2024-11-29 18:30:41.988637] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:22.333 [2024-11-29 18:30:41.988646] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:22.333 [2024-11-29 18:30:41.988655] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:22.333 [2024-11-29 18:30:41.988663] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:22.333 [2024-11-29 18:30:41.988669] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:22.333 [2024-11-29 18:30:41.988676] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:22.333 [2024-11-29 18:30:41.988681] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:22.333 [2024-11-29 18:30:41.988688] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:22.333 [2024-11-29 18:30:41.988694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.333 [2024-11-29 18:30:41.988702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:22.333 [2024-11-29 18:30:41.988708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:17:22.333 [2024-11-29 18:30:41.988714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.333 [2024-11-29 18:30:41.988787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.333 [2024-11-29 18:30:41.988806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:22.333 [2024-11-29 18:30:41.988811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:22.333 [2024-11-29 18:30:41.988825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.333 [2024-11-29 18:30:41.988934] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:22.333 [2024-11-29 18:30:41.988947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:22.333 
[2024-11-29 18:30:41.988953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.333 [2024-11-29 18:30:41.988961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.333 [2024-11-29 18:30:41.988967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:22.333 [2024-11-29 18:30:41.988973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:22.333 [2024-11-29 18:30:41.988978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:22.333 [2024-11-29 18:30:41.988989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:22.333 [2024-11-29 18:30:41.988994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:22.333 [2024-11-29 18:30:41.989001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.333 [2024-11-29 18:30:41.989006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:22.333 [2024-11-29 18:30:41.989014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:22.333 [2024-11-29 18:30:41.989019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.333 [2024-11-29 18:30:41.989027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:22.333 [2024-11-29 18:30:41.989043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:22.333 [2024-11-29 18:30:41.989050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.333 [2024-11-29 18:30:41.989056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:22.333 [2024-11-29 18:30:41.989063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:22.333 [2024-11-29 18:30:41.989068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.333 [2024-11-29 18:30:41.989075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:22.333 [2024-11-29 18:30:41.989081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:22.333 [2024-11-29 18:30:41.989088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.333 [2024-11-29 18:30:41.989094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:22.333 [2024-11-29 18:30:41.989101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:22.333 [2024-11-29 18:30:41.989107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.333 [2024-11-29 18:30:41.989114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:22.333 [2024-11-29 18:30:41.989119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:22.333 [2024-11-29 18:30:41.989126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.333 [2024-11-29 18:30:41.989132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:22.333 [2024-11-29 18:30:41.989140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:22.333 [2024-11-29 18:30:41.989145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.333 [2024-11-29 18:30:41.989152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:22.333 [2024-11-29 18:30:41.989158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:22.333 [2024-11-29 18:30:41.989165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:17:22.333 [2024-11-29 18:30:41.989170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:22.333 [2024-11-29 18:30:41.989179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:22.333 [2024-11-29 18:30:41.989184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.333 [2024-11-29 18:30:41.989192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:22.333 [2024-11-29 18:30:41.989198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:22.333 [2024-11-29 18:30:41.989206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.333 [2024-11-29 18:30:41.989212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:22.333 [2024-11-29 18:30:41.989219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:22.333 [2024-11-29 18:30:41.989226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.333 [2024-11-29 18:30:41.989232] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:22.333 [2024-11-29 18:30:41.989239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:22.333 [2024-11-29 18:30:41.989249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.333 [2024-11-29 18:30:41.989255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.333 [2024-11-29 18:30:41.989264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:22.333 [2024-11-29 18:30:41.989270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:22.333 [2024-11-29 18:30:41.989277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:22.333 [2024-11-29 18:30:41.989283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:22.333 [2024-11-29 18:30:41.989290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:22.333 [2024-11-29 18:30:41.989296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:22.333 [2024-11-29 18:30:41.989305] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:22.333 [2024-11-29 18:30:41.989313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.333 [2024-11-29 18:30:41.989321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:22.333 [2024-11-29 18:30:41.989327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:22.333 [2024-11-29 18:30:41.989335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:22.333 [2024-11-29 18:30:41.989341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:22.333 [2024-11-29 18:30:41.989349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:22.333 [2024-11-29 18:30:41.989356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:22.333 [2024-11-29 
18:30:41.989364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:22.333 [2024-11-29 18:30:41.989370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:22.333 [2024-11-29 18:30:41.989378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:22.333 [2024-11-29 18:30:41.989384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:22.334 [2024-11-29 18:30:41.989391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:22.334 [2024-11-29 18:30:41.989397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:22.334 [2024-11-29 18:30:41.989405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:22.334 [2024-11-29 18:30:41.989411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:22.334 [2024-11-29 18:30:41.989418] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:22.334 [2024-11-29 18:30:41.989424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.334 [2024-11-29 18:30:41.989433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:22.334 [2024-11-29 18:30:41.989438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:22.334 [2024-11-29 18:30:41.989445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:22.334 [2024-11-29 18:30:41.989451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:22.334 [2024-11-29 18:30:41.989472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.334 [2024-11-29 18:30:41.989478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:22.334 [2024-11-29 18:30:41.989486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:17:22.334 [2024-11-29 18:30:41.989491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.334 [2024-11-29 18:30:41.989565] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
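[editor's note] The SB metadata layout entries just above report each region as hex offsets/sizes in FTL blocks; with the 4096-byte block size reported earlier they convert directly to the MiB figures in the NV cache layout dump. A throwaway check using only values already logged (matching blk_sz 0x5000 against the 80.00 MiB region, which lines up with l2p):

    # Convert a region's blk_sz (in 4 KiB FTL blocks) to MiB; value taken from
    # the "Region type:0x2 ... blk_sz:0x5000" line in the dump above.
    blk_sz=0x5000
    echo $(( blk_sz * 4096 / 1024 / 1024 ))   # -> 80, matching "Region l2p ... blocks: 80.00 MiB"
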
00:17:22.334 [2024-11-29 18:30:41.989578] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:24.867 [2024-11-29 18:30:44.497564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.497623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:24.867 [2024-11-29 18:30:44.497639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2507.988 ms 00:17:24.867 [2024-11-29 18:30:44.497648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.506191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.506368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:24.867 [2024-11-29 18:30:44.506387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.455 ms 00:17:24.867 [2024-11-29 18:30:44.506396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.506511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.506522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:24.867 [2024-11-29 18:30:44.506533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:24.867 [2024-11-29 18:30:44.506541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.526274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.526322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:24.867 [2024-11-29 18:30:44.526340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.674 ms 00:17:24.867 [2024-11-29 18:30:44.526350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.526398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.526409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:24.867 [2024-11-29 18:30:44.526422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:24.867 [2024-11-29 18:30:44.526431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.526840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.526864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:24.867 [2024-11-29 18:30:44.526880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:17:24.867 [2024-11-29 18:30:44.526889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.527045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.527057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:24.867 [2024-11-29 18:30:44.527070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:17:24.867 [2024-11-29 18:30:44.527091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.532918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.533072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:24.867 [2024-11-29 
18:30:44.533101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.792 ms 00:17:24.867 [2024-11-29 18:30:44.533112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.541639] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:24.867 [2024-11-29 18:30:44.556519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.556561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:24.867 [2024-11-29 18:30:44.556572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.314 ms 00:17:24.867 [2024-11-29 18:30:44.556583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.606222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.606268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:24.867 [2024-11-29 18:30:44.606281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.603 ms 00:17:24.867 [2024-11-29 18:30:44.606293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.606492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.606506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:24.867 [2024-11-29 18:30:44.606515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:17:24.867 [2024-11-29 18:30:44.606523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.609407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.609441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:24.867 [2024-11-29 18:30:44.609469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.855 ms 00:17:24.867 [2024-11-29 18:30:44.609481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.611858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.612016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:24.867 [2024-11-29 18:30:44.612033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.330 ms 00:17:24.867 [2024-11-29 18:30:44.612043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.612359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.612373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:24.867 [2024-11-29 18:30:44.612382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:17:24.867 [2024-11-29 18:30:44.612392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.638442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.638488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:24.867 [2024-11-29 18:30:44.638498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.019 ms 00:17:24.867 [2024-11-29 18:30:44.638508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.642185] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.642325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:24.867 [2024-11-29 18:30:44.642341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.604 ms 00:17:24.867 [2024-11-29 18:30:44.642351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.645206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.645239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:24.867 [2024-11-29 18:30:44.645249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.815 ms 00:17:24.867 [2024-11-29 18:30:44.645258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.648744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.648857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:24.867 [2024-11-29 18:30:44.648913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.449 ms 00:17:24.867 [2024-11-29 18:30:44.648941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.649003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.649173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:24.867 [2024-11-29 18:30:44.649186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:24.867 [2024-11-29 18:30:44.649196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.649279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.867 [2024-11-29 18:30:44.649292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:24.867 [2024-11-29 18:30:44.649300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:24.867 [2024-11-29 18:30:44.649309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.867 [2024-11-29 18:30:44.650333] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2669.909 ms, result 0 00:17:24.867 { 00:17:24.867 "name": "ftl0", 00:17:24.867 "uuid": "9b2cd339-7abf-4e83-9f94-676cad6f00bd" 00:17:24.867 } 00:17:24.867 18:30:44 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:24.867 18:30:44 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:24.867 18:30:44 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:24.867 18:30:44 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:24.867 18:30:44 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:24.867 18:30:44 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:24.867 18:30:44 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:25.126 18:30:44 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:25.385 [ 00:17:25.385 { 00:17:25.385 "name": "ftl0", 00:17:25.385 "aliases": [ 00:17:25.385 "9b2cd339-7abf-4e83-9f94-676cad6f00bd" 00:17:25.385 ], 00:17:25.385 "product_name": "FTL disk", 00:17:25.385 
"block_size": 4096, 00:17:25.385 "num_blocks": 20971520, 00:17:25.385 "uuid": "9b2cd339-7abf-4e83-9f94-676cad6f00bd", 00:17:25.385 "assigned_rate_limits": { 00:17:25.385 "rw_ios_per_sec": 0, 00:17:25.385 "rw_mbytes_per_sec": 0, 00:17:25.385 "r_mbytes_per_sec": 0, 00:17:25.385 "w_mbytes_per_sec": 0 00:17:25.385 }, 00:17:25.385 "claimed": false, 00:17:25.385 "zoned": false, 00:17:25.385 "supported_io_types": { 00:17:25.385 "read": true, 00:17:25.385 "write": true, 00:17:25.385 "unmap": true, 00:17:25.385 "flush": true, 00:17:25.385 "reset": false, 00:17:25.385 "nvme_admin": false, 00:17:25.385 "nvme_io": false, 00:17:25.385 "nvme_io_md": false, 00:17:25.385 "write_zeroes": true, 00:17:25.385 "zcopy": false, 00:17:25.385 "get_zone_info": false, 00:17:25.385 "zone_management": false, 00:17:25.385 "zone_append": false, 00:17:25.385 "compare": false, 00:17:25.385 "compare_and_write": false, 00:17:25.385 "abort": false, 00:17:25.385 "seek_hole": false, 00:17:25.385 "seek_data": false, 00:17:25.385 "copy": false, 00:17:25.385 "nvme_iov_md": false 00:17:25.385 }, 00:17:25.385 "driver_specific": { 00:17:25.385 "ftl": { 00:17:25.385 "base_bdev": "6b564208-5ee3-4ab5-90d7-f22fe8f11dbf", 00:17:25.385 "cache": "nvc0n1p0" 00:17:25.385 } 00:17:25.385 } 00:17:25.385 } 00:17:25.385 ] 00:17:25.385 18:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:25.385 18:30:45 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:25.385 18:30:45 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:25.385 18:30:45 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:25.386 18:30:45 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:25.644 [2024-11-29 18:30:45.456056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.645 [2024-11-29 18:30:45.456171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:25.645 [2024-11-29 18:30:45.456189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:25.645 [2024-11-29 18:30:45.456196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.645 [2024-11-29 18:30:45.456232] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:25.645 [2024-11-29 18:30:45.456678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.645 [2024-11-29 18:30:45.456699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:25.645 [2024-11-29 18:30:45.456707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:17:25.645 [2024-11-29 18:30:45.456727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.645 [2024-11-29 18:30:45.457103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.645 [2024-11-29 18:30:45.457118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:25.645 [2024-11-29 18:30:45.457126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:17:25.645 [2024-11-29 18:30:45.457135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.645 [2024-11-29 18:30:45.459557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.645 [2024-11-29 18:30:45.459578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:25.645 [2024-11-29 
18:30:45.459585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.399 ms 00:17:25.645 [2024-11-29 18:30:45.459603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.645 [2024-11-29 18:30:45.464177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.645 [2024-11-29 18:30:45.464202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:25.645 [2024-11-29 18:30:45.464210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.552 ms 00:17:25.645 [2024-11-29 18:30:45.464218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.645 [2024-11-29 18:30:45.465543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.645 [2024-11-29 18:30:45.465575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:25.645 [2024-11-29 18:30:45.465582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.252 ms 00:17:25.645 [2024-11-29 18:30:45.465590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.645 [2024-11-29 18:30:45.469208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.645 [2024-11-29 18:30:45.469241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:25.645 [2024-11-29 18:30:45.469249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.574 ms 00:17:25.645 [2024-11-29 18:30:45.469256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.645 [2024-11-29 18:30:45.469390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.645 [2024-11-29 18:30:45.469400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:25.645 [2024-11-29 18:30:45.469407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:17:25.645 [2024-11-29 18:30:45.469414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.645 [2024-11-29 18:30:45.470994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.645 [2024-11-29 18:30:45.471094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:25.645 [2024-11-29 18:30:45.471106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.554 ms 00:17:25.645 [2024-11-29 18:30:45.471113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.645 [2024-11-29 18:30:45.472248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.645 [2024-11-29 18:30:45.472274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:25.645 [2024-11-29 18:30:45.472281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.102 ms 00:17:25.645 [2024-11-29 18:30:45.472289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.645 [2024-11-29 18:30:45.473232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.645 [2024-11-29 18:30:45.473262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:25.645 [2024-11-29 18:30:45.473269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:17:25.645 [2024-11-29 18:30:45.473276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.645 [2024-11-29 18:30:45.474207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.645 [2024-11-29 18:30:45.474305] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:25.645 [2024-11-29 18:30:45.474316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.852 ms 00:17:25.645 [2024-11-29 18:30:45.474323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.645 [2024-11-29 18:30:45.474355] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:25.645 [2024-11-29 18:30:45.474367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 
18:30:45.474540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:25.645 [2024-11-29 18:30:45.474702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:17:25.645 [2024-11-29 18:30:45.474708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.474995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.475001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.475008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.475014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.475021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.475028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.475036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.475041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.475051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.475056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.475065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.475071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:25.646 [2024-11-29 18:30:45.475085] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:25.646 [2024-11-29 18:30:45.475091] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9b2cd339-7abf-4e83-9f94-676cad6f00bd 00:17:25.646 [2024-11-29 18:30:45.475099] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:25.646 [2024-11-29 18:30:45.475112] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:25.646 [2024-11-29 18:30:45.475119] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:25.646 [2024-11-29 18:30:45.475125] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:25.646 [2024-11-29 18:30:45.475139] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:25.646 [2024-11-29 18:30:45.475146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:25.646 [2024-11-29 18:30:45.475153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:25.646 [2024-11-29 18:30:45.475157] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:25.646 [2024-11-29 18:30:45.475163] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:25.646 [2024-11-29 18:30:45.475169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.646 [2024-11-29 18:30:45.475176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:25.646 [2024-11-29 18:30:45.475183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:17:25.646 [2024-11-29 18:30:45.475191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.646 [2024-11-29 18:30:45.476597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.646 [2024-11-29 18:30:45.476616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:25.646 [2024-11-29 18:30:45.476624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:17:25.646 [2024-11-29 18:30:45.476631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.646 [2024-11-29 18:30:45.476704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.646 [2024-11-29 18:30:45.476729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:25.646 [2024-11-29 18:30:45.476737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:25.646 [2024-11-29 18:30:45.476752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.646 [2024-11-29 18:30:45.481585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.646 [2024-11-29 18:30:45.481623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:25.646 [2024-11-29 18:30:45.481631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.646 [2024-11-29 18:30:45.481638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.646 
[2024-11-29 18:30:45.481686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.646 [2024-11-29 18:30:45.481694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:25.646 [2024-11-29 18:30:45.481704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.646 [2024-11-29 18:30:45.481711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.646 [2024-11-29 18:30:45.481783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.646 [2024-11-29 18:30:45.481795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:25.646 [2024-11-29 18:30:45.481802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.646 [2024-11-29 18:30:45.481810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.646 [2024-11-29 18:30:45.481834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.646 [2024-11-29 18:30:45.481842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:25.646 [2024-11-29 18:30:45.481848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.646 [2024-11-29 18:30:45.481858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.646 [2024-11-29 18:30:45.490521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.646 [2024-11-29 18:30:45.490558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:25.646 [2024-11-29 18:30:45.490566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.646 [2024-11-29 18:30:45.490574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.646 [2024-11-29 18:30:45.497665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.646 [2024-11-29 18:30:45.497702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:25.647 [2024-11-29 18:30:45.497710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.647 [2024-11-29 18:30:45.497719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.647 [2024-11-29 18:30:45.497785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.647 [2024-11-29 18:30:45.497797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:25.647 [2024-11-29 18:30:45.497804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.647 [2024-11-29 18:30:45.497811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.647 [2024-11-29 18:30:45.497874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.647 [2024-11-29 18:30:45.497885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:25.647 [2024-11-29 18:30:45.497891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.647 [2024-11-29 18:30:45.497898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.647 [2024-11-29 18:30:45.497968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.647 [2024-11-29 18:30:45.497979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:25.647 [2024-11-29 18:30:45.497985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.647 [2024-11-29 18:30:45.497993] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.647 [2024-11-29 18:30:45.498031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.647 [2024-11-29 18:30:45.498040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:25.647 [2024-11-29 18:30:45.498046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.647 [2024-11-29 18:30:45.498053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.647 [2024-11-29 18:30:45.498103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.647 [2024-11-29 18:30:45.498115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:25.647 [2024-11-29 18:30:45.498121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.647 [2024-11-29 18:30:45.498129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.647 [2024-11-29 18:30:45.498171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.647 [2024-11-29 18:30:45.498181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:25.647 [2024-11-29 18:30:45.498187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.647 [2024-11-29 18:30:45.498195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.647 [2024-11-29 18:30:45.498332] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 42.257 ms, result 0 00:17:25.647 true 00:17:25.647 18:30:45 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 86491 00:17:25.647 18:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 86491 ']' 00:17:25.647 18:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 86491 00:17:25.647 18:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:17:25.647 18:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.647 18:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86491 00:17:25.647 killing process with pid 86491 00:17:25.647 18:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.647 18:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.647 18:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86491' 00:17:25.647 18:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 86491 00:17:25.647 18:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 86491 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:30.915 18:30:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:30.915 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:30.915 fio-3.35 00:17:30.915 Starting 1 thread 00:17:35.122 00:17:35.122 test: (groupid=0, jobs=1): err= 0: pid=86649: Fri Nov 29 18:30:54 2024 00:17:35.122 read: IOPS=1054, BW=70.0MiB/s (73.4MB/s)(255MiB/3635msec) 00:17:35.122 slat (nsec): min=3043, max=22357, avg=5334.78, stdev=1895.69 00:17:35.122 clat (usec): min=272, max=2528, avg=427.41, stdev=129.34 00:17:35.122 lat (usec): min=275, max=2533, avg=432.74, stdev=129.49 00:17:35.122 clat percentiles (usec): 00:17:35.122 | 1.00th=[ 306], 5.00th=[ 322], 10.00th=[ 322], 20.00th=[ 326], 00:17:35.122 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 396], 60.00th=[ 429], 00:17:35.122 | 70.00th=[ 465], 80.00th=[ 529], 90.00th=[ 603], 95.00th=[ 668], 00:17:35.122 | 99.00th=[ 816], 99.50th=[ 873], 99.90th=[ 1205], 99.95th=[ 2409], 00:17:35.122 | 99.99th=[ 2540] 00:17:35.122 write: IOPS=1062, BW=70.5MiB/s (74.0MB/s)(256MiB/3630msec); 0 zone resets 00:17:35.122 slat (nsec): min=14100, max=45109, avg=18874.93, stdev=2682.57 00:17:35.122 clat (usec): min=292, max=1450, avg=480.29, stdev=142.68 00:17:35.122 lat (usec): min=309, max=1466, avg=499.16, stdev=142.75 00:17:35.122 clat percentiles (usec): 00:17:35.122 | 1.00th=[ 334], 5.00th=[ 347], 10.00th=[ 347], 20.00th=[ 355], 00:17:35.122 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 478], 60.00th=[ 494], 00:17:35.122 | 70.00th=[ 553], 80.00th=[ 570], 90.00th=[ 685], 95.00th=[ 758], 00:17:35.122 | 99.00th=[ 922], 99.50th=[ 1020], 99.90th=[ 1221], 99.95th=[ 1303], 00:17:35.122 | 99.99th=[ 1450] 00:17:35.122 bw ( KiB/s): min=57120, max=87176, per=100.00%, avg=73012.57, stdev=11189.92, samples=7 00:17:35.122 iops : min= 840, max= 1282, avg=1073.71, stdev=164.56, samples=7 00:17:35.122 lat (usec) : 500=68.58%, 750=27.73%, 1000=3.28% 
00:17:35.122 lat (msec) : 2=0.39%, 4=0.03% 00:17:35.122 cpu : usr=99.26%, sys=0.11%, ctx=15, majf=0, minf=1181 00:17:35.122 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:35.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.122 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:35.122 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:35.122 00:17:35.122 Run status group 0 (all jobs): 00:17:35.123 READ: bw=70.0MiB/s (73.4MB/s), 70.0MiB/s-70.0MiB/s (73.4MB/s-73.4MB/s), io=255MiB (267MB), run=3635-3635msec 00:17:35.123 WRITE: bw=70.5MiB/s (74.0MB/s), 70.5MiB/s-70.5MiB/s (74.0MB/s-74.0MB/s), io=256MiB (269MB), run=3630-3630msec 00:17:35.692 ----------------------------------------------------- 00:17:35.692 Suppressions used: 00:17:35.692 count bytes template 00:17:35.692 1 5 /usr/src/fio/parse.c 00:17:35.692 1 8 libtcmalloc_minimal.so 00:17:35.692 1 904 libcrypto.so 00:17:35.692 ----------------------------------------------------- 00:17:35.692 00:17:35.692 18:30:55 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:35.692 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:35.692 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:35.692 18:30:55 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:35.692 18:30:55 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:35.693 18:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:35.954 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:35.954 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:35.954 fio-3.35 00:17:35.954 Starting 2 threads 00:18:02.510 00:18:02.510 first_half: (groupid=0, jobs=1): err= 0: pid=86735: Fri Nov 29 18:31:18 2024 00:18:02.510 read: IOPS=2940, BW=11.5MiB/s (12.0MB/s)(256MiB/22263msec) 00:18:02.510 slat (usec): min=3, max=128, avg= 5.18, stdev= 1.14 00:18:02.510 clat (usec): min=1167, max=489082, avg=36909.55, stdev=24929.93 00:18:02.510 lat (usec): min=1171, max=489108, avg=36914.73, stdev=24930.12 00:18:02.510 clat percentiles (msec): 00:18:02.510 | 1.00th=[ 14], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:18:02.510 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 33], 00:18:02.510 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 41], 95.00th=[ 68], 00:18:02.510 | 99.00th=[ 148], 99.50th=[ 157], 99.90th=[ 368], 99.95th=[ 460], 00:18:02.510 | 99.99th=[ 489] 00:18:02.510 write: IOPS=2950, BW=11.5MiB/s (12.1MB/s)(256MiB/22210msec); 0 zone resets 00:18:02.510 slat (usec): min=3, max=954, avg= 6.49, stdev= 6.09 00:18:02.510 clat (usec): min=352, max=42231, avg=6583.66, stdev=6573.06 00:18:02.510 lat (usec): min=364, max=42236, avg=6590.16, stdev=6573.23 00:18:02.510 clat percentiles (usec): 00:18:02.510 | 1.00th=[ 725], 5.00th=[ 898], 10.00th=[ 1123], 20.00th=[ 2409], 00:18:02.510 | 30.00th=[ 3195], 40.00th=[ 3949], 50.00th=[ 4752], 60.00th=[ 5407], 00:18:02.510 | 70.00th=[ 5997], 80.00th=[ 9634], 90.00th=[13829], 95.00th=[23200], 00:18:02.510 | 99.00th=[31327], 99.50th=[33817], 99.90th=[38011], 99.95th=[39060], 00:18:02.510 | 99.99th=[41681] 00:18:02.510 bw ( KiB/s): min= 2144, max=48208, per=100.00%, avg=26038.40, stdev=13191.91, samples=20 00:18:02.510 iops : min= 536, max=12052, avg=6509.70, stdev=3298.09, samples=20 00:18:02.510 lat (usec) : 500=0.03%, 750=0.61%, 1000=3.21% 00:18:02.510 lat (msec) : 2=4.83%, 4=11.70%, 10=20.26%, 20=7.67%, 50=48.38% 00:18:02.510 lat (msec) : 100=1.60%, 250=1.64%, 500=0.07% 00:18:02.510 cpu : usr=99.35%, sys=0.12%, ctx=33, majf=0, minf=5549 00:18:02.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:02.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:02.510 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:02.510 issued rwts: total=65469,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:02.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:02.510 second_half: (groupid=0, jobs=1): err= 0: pid=86736: Fri Nov 29 18:31:18 2024 00:18:02.510 read: IOPS=2982, BW=11.7MiB/s (12.2MB/s)(256MiB/21957msec) 00:18:02.510 slat (nsec): min=3135, max=25708, avg=4301.70, stdev=1040.11 00:18:02.510 clat (msec): min=10, max=451, avg=37.00, stdev=20.81 00:18:02.510 lat (msec): min=10, max=451, avg=37.00, stdev=20.81 00:18:02.510 clat percentiles (msec): 00:18:02.510 | 1.00th=[ 28], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:18:02.510 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 33], 00:18:02.510 | 70.00th=[ 36], 80.00th=[ 36], 90.00th=[ 42], 95.00th=[ 65], 
00:18:02.510 | 99.00th=[ 142], 99.50th=[ 153], 99.90th=[ 174], 99.95th=[ 236], 00:18:02.510 | 99.99th=[ 439] 00:18:02.510 write: IOPS=3001, BW=11.7MiB/s (12.3MB/s)(256MiB/21831msec); 0 zone resets 00:18:02.510 slat (usec): min=3, max=3195, avg= 5.90, stdev=16.62 00:18:02.510 clat (usec): min=327, max=36430, avg=5894.82, stdev=4515.44 00:18:02.510 lat (usec): min=345, max=36435, avg=5900.72, stdev=4516.45 00:18:02.510 clat percentiles (usec): 00:18:02.511 | 1.00th=[ 783], 5.00th=[ 1500], 10.00th=[ 2343], 20.00th=[ 2900], 00:18:02.511 | 30.00th=[ 3556], 40.00th=[ 4228], 50.00th=[ 4817], 60.00th=[ 5342], 00:18:02.511 | 70.00th=[ 5669], 80.00th=[ 6915], 90.00th=[11731], 95.00th=[15401], 00:18:02.511 | 99.00th=[23200], 99.50th=[29754], 99.90th=[33162], 99.95th=[34341], 00:18:02.511 | 99.99th=[35914] 00:18:02.511 bw ( KiB/s): min= 1008, max=47568, per=100.00%, avg=26034.00, stdev=14617.16, samples=20 00:18:02.511 iops : min= 252, max=11892, avg=6508.50, stdev=3654.29, samples=20 00:18:02.511 lat (usec) : 500=0.03%, 750=0.32%, 1000=0.82% 00:18:02.511 lat (msec) : 2=2.34%, 4=14.67%, 10=24.32%, 20=6.65%, 50=47.56% 00:18:02.511 lat (msec) : 100=1.64%, 250=1.62%, 500=0.02% 00:18:02.511 cpu : usr=99.19%, sys=0.15%, ctx=36, majf=0, minf=5591 00:18:02.511 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:02.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:02.511 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:02.511 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:02.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:02.511 00:18:02.511 Run status group 0 (all jobs): 00:18:02.511 READ: bw=23.0MiB/s (24.1MB/s), 11.5MiB/s-11.7MiB/s (12.0MB/s-12.2MB/s), io=512MiB (536MB), run=21957-22263msec 00:18:02.511 WRITE: bw=23.1MiB/s (24.2MB/s), 11.5MiB/s-11.7MiB/s (12.1MB/s-12.3MB/s), io=512MiB (537MB), run=21831-22210msec 00:18:02.511 ----------------------------------------------------- 00:18:02.511 Suppressions used: 00:18:02.511 count bytes template 00:18:02.511 2 10 /usr/src/fio/parse.c 00:18:02.511 4 384 /usr/src/fio/iolog.c 00:18:02.511 1 8 libtcmalloc_minimal.so 00:18:02.511 1 904 libcrypto.so 00:18:02.511 ----------------------------------------------------- 00:18:02.511 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:02.511 
18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:02.511 18:31:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:02.511 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:02.511 fio-3.35 00:18:02.511 Starting 1 thread 00:18:17.422 00:18:17.422 test: (groupid=0, jobs=1): err= 0: pid=87027: Fri Nov 29 18:31:36 2024 00:18:17.422 read: IOPS=6463, BW=25.2MiB/s (26.5MB/s)(255MiB/10087msec) 00:18:17.422 slat (usec): min=3, max=398, avg= 6.41, stdev= 3.22 00:18:17.422 clat (usec): min=477, max=33000, avg=19790.51, stdev=3418.20 00:18:17.422 lat (usec): min=484, max=33004, avg=19796.92, stdev=3419.03 00:18:17.422 clat percentiles (usec): 00:18:17.422 | 1.00th=[14353], 5.00th=[14615], 10.00th=[15139], 20.00th=[16909], 00:18:17.422 | 30.00th=[17695], 40.00th=[18482], 50.00th=[19530], 60.00th=[20579], 00:18:17.422 | 70.00th=[21627], 80.00th=[22676], 90.00th=[24249], 95.00th=[25822], 00:18:17.422 | 99.00th=[29230], 99.50th=[30278], 99.90th=[31327], 99.95th=[31589], 00:18:17.422 | 99.99th=[32375] 00:18:17.422 write: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(256MiB/5626msec); 0 zone resets 00:18:17.422 slat (usec): min=4, max=1093, avg= 6.29, stdev= 5.99 00:18:17.422 clat (usec): min=490, max=60729, avg=10928.53, stdev=12486.44 00:18:17.422 lat (usec): min=495, max=60737, avg=10934.81, stdev=12486.57 00:18:17.422 clat percentiles (usec): 00:18:17.422 | 1.00th=[ 717], 5.00th=[ 906], 10.00th=[ 1037], 20.00th=[ 1254], 00:18:17.422 | 30.00th=[ 1680], 40.00th=[ 2606], 50.00th=[ 7767], 60.00th=[ 9503], 00:18:17.422 | 70.00th=[11600], 80.00th=[15270], 90.00th=[31065], 95.00th=[41681], 00:18:17.422 | 99.00th=[49546], 99.50th=[52167], 99.90th=[56886], 99.95th=[57410], 00:18:17.422 | 99.99th=[59507] 00:18:17.422 bw ( KiB/s): min=12296, max=61312, per=93.77%, avg=43690.67, stdev=13258.81, samples=12 00:18:17.422 iops : min= 3074, max=15328, avg=10922.67, stdev=3314.70, samples=12 00:18:17.422 lat (usec) : 500=0.01%, 750=0.71%, 1000=3.59% 00:18:17.422 lat (msec) : 2=13.30%, 4=3.28%, 10=10.68%, 20=38.14%, 50=29.86% 00:18:17.422 lat (msec) : 100=0.45% 00:18:17.422 cpu : usr=98.75%, sys=0.20%, ctx=31, 
majf=0, minf=5577 00:18:17.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:17.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.422 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:17.422 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.422 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:17.422 00:18:17.422 Run status group 0 (all jobs): 00:18:17.422 READ: bw=25.2MiB/s (26.5MB/s), 25.2MiB/s-25.2MiB/s (26.5MB/s-26.5MB/s), io=255MiB (267MB), run=10087-10087msec 00:18:17.422 WRITE: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=256MiB (268MB), run=5626-5626msec 00:18:17.993 ----------------------------------------------------- 00:18:17.993 Suppressions used: 00:18:17.993 count bytes template 00:18:17.993 1 5 /usr/src/fio/parse.c 00:18:17.993 2 192 /usr/src/fio/iolog.c 00:18:17.993 1 8 libtcmalloc_minimal.so 00:18:17.993 1 904 libcrypto.so 00:18:17.993 ----------------------------------------------------- 00:18:17.993 00:18:17.993 18:31:37 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:17.993 18:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:17.993 18:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:17.993 18:31:37 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:17.993 Remove shared memory files 00:18:17.993 18:31:37 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:17.993 18:31:37 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:17.993 18:31:37 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:17.993 18:31:37 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:17.993 18:31:37 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69482 /dev/shm/spdk_tgt_trace.pid85431 00:18:17.993 18:31:37 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:17.993 18:31:37 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:17.993 ************************************ 00:18:17.993 END TEST ftl_fio_basic 00:18:17.993 ************************************ 00:18:17.993 00:18:17.993 real 0m59.497s 00:18:17.993 user 2m12.195s 00:18:17.993 sys 0m2.655s 00:18:17.993 18:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:17.993 18:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:18.255 18:31:37 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:18.255 18:31:37 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:18.255 18:31:37 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.255 18:31:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:18.255 ************************************ 00:18:18.255 START TEST ftl_bdevperf 00:18:18.255 ************************************ 00:18:18.255 18:31:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:18.255 * Looking for test storage... 
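All three fio jobs in the ftl_fio_basic run above (randw-verify, randw-verify-j2 and randw-verify-depth128) went through the same fio_plugin helper, whose xtrace repeats before each job: ldd is asked which sanitizer runtime the spdk_bdev ioengine links against, and that library is preloaded ahead of the plugin itself so ASAN's interceptors resolve before any other shared object. A minimal bash sketch of the pattern, assuming a simplified form of the autotest_common.sh helper rather than the verbatim function:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    sanitizers=('libasan' 'libclang_rt.asan')
    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # awk pulls the resolved-path column out of the ldd output
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # e.g. LD_PRELOAD='/usr/lib64/libasan.so.8 .../spdk_bdev' fio job.fio
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$1"

On a non-sanitizer build asan_lib stays empty and the command degrades to a plain preload of the ioengine, which is why one helper serves both build flavors.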
00:18:18.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:18.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.255 --rc genhtml_branch_coverage=1 00:18:18.255 --rc genhtml_function_coverage=1 00:18:18.255 --rc genhtml_legend=1 00:18:18.255 --rc geninfo_all_blocks=1 00:18:18.255 --rc geninfo_unexecuted_blocks=1 00:18:18.255 00:18:18.255 ' 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:18.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.255 --rc genhtml_branch_coverage=1 00:18:18.255 
--rc genhtml_function_coverage=1 00:18:18.255 --rc genhtml_legend=1 00:18:18.255 --rc geninfo_all_blocks=1 00:18:18.255 --rc geninfo_unexecuted_blocks=1 00:18:18.255 00:18:18.255 ' 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:18.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.255 --rc genhtml_branch_coverage=1 00:18:18.255 --rc genhtml_function_coverage=1 00:18:18.255 --rc genhtml_legend=1 00:18:18.255 --rc geninfo_all_blocks=1 00:18:18.255 --rc geninfo_unexecuted_blocks=1 00:18:18.255 00:18:18.255 ' 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:18.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.255 --rc genhtml_branch_coverage=1 00:18:18.255 --rc genhtml_function_coverage=1 00:18:18.255 --rc genhtml_legend=1 00:18:18.255 --rc geninfo_all_blocks=1 00:18:18.255 --rc geninfo_unexecuted_blocks=1 00:18:18.255 00:18:18.255 ' 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:18.255 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=87282 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 87282 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 87282 ']' 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.256 18:31:38 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:18.516 [2024-11-29 18:31:38.172868] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
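The bdevperf process whose startup banner follows was launched a few lines earlier with -z (stay idle until driven over RPC) and -T ftl0 (scope the run to the bdev under test), after which the script parks on waitforlisten until the RPC socket answers. A hedged sketch of that bring-up, with the waitforlisten body being an assumed simplification of the autotest_common.sh helper, not its verbatim code:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$bdevperf" -z -T ftl0 &
    bdevperf_pid=$!
    trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT

    waitforlisten() {
        # poll until the app's RPC socket accepts a trivial call,
        # bailing out if the target dies before it ever listens
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        while kill -0 "$pid" 2>/dev/null; do
            "$rpc" -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    waitforlisten "$bdevperf_pid"

Only once this returns does the script start stacking bdevs for ftl0, which is what the RPC traffic below shows.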
00:18:18.516 [2024-11-29 18:31:38.173235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87282 ] 00:18:18.516 [2024-11-29 18:31:38.333407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.516 [2024-11-29 18:31:38.362058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.457 18:31:39 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.457 18:31:39 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:19.457 18:31:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:19.457 18:31:39 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:19.457 18:31:39 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:19.457 18:31:39 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:19.457 18:31:39 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:19.457 18:31:39 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:19.718 18:31:39 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:19.718 18:31:39 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:19.718 18:31:39 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:19.718 18:31:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:19.718 18:31:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:19.718 18:31:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:19.718 18:31:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:19.718 18:31:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:19.718 18:31:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:19.718 { 00:18:19.718 "name": "nvme0n1", 00:18:19.718 "aliases": [ 00:18:19.718 "b1b0d341-6d57-43c7-ab8a-84e250cf1906" 00:18:19.718 ], 00:18:19.718 "product_name": "NVMe disk", 00:18:19.718 "block_size": 4096, 00:18:19.718 "num_blocks": 1310720, 00:18:19.718 "uuid": "b1b0d341-6d57-43c7-ab8a-84e250cf1906", 00:18:19.718 "numa_id": -1, 00:18:19.718 "assigned_rate_limits": { 00:18:19.718 "rw_ios_per_sec": 0, 00:18:19.718 "rw_mbytes_per_sec": 0, 00:18:19.718 "r_mbytes_per_sec": 0, 00:18:19.718 "w_mbytes_per_sec": 0 00:18:19.718 }, 00:18:19.718 "claimed": true, 00:18:19.718 "claim_type": "read_many_write_one", 00:18:19.718 "zoned": false, 00:18:19.718 "supported_io_types": { 00:18:19.718 "read": true, 00:18:19.718 "write": true, 00:18:19.718 "unmap": true, 00:18:19.718 "flush": true, 00:18:19.718 "reset": true, 00:18:19.718 "nvme_admin": true, 00:18:19.718 "nvme_io": true, 00:18:19.718 "nvme_io_md": false, 00:18:19.718 "write_zeroes": true, 00:18:19.718 "zcopy": false, 00:18:19.718 "get_zone_info": false, 00:18:19.718 "zone_management": false, 00:18:19.718 "zone_append": false, 00:18:19.718 "compare": true, 00:18:19.718 "compare_and_write": false, 00:18:19.718 "abort": true, 00:18:19.718 "seek_hole": false, 00:18:19.718 "seek_data": false, 00:18:19.718 "copy": true, 00:18:19.718 "nvme_iov_md": false 00:18:19.718 }, 00:18:19.718 "driver_specific": { 00:18:19.718 
"nvme": [ 00:18:19.718 { 00:18:19.718 "pci_address": "0000:00:11.0", 00:18:19.718 "trid": { 00:18:19.718 "trtype": "PCIe", 00:18:19.718 "traddr": "0000:00:11.0" 00:18:19.718 }, 00:18:19.718 "ctrlr_data": { 00:18:19.718 "cntlid": 0, 00:18:19.718 "vendor_id": "0x1b36", 00:18:19.718 "model_number": "QEMU NVMe Ctrl", 00:18:19.718 "serial_number": "12341", 00:18:19.718 "firmware_revision": "8.0.0", 00:18:19.718 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:19.718 "oacs": { 00:18:19.718 "security": 0, 00:18:19.718 "format": 1, 00:18:19.718 "firmware": 0, 00:18:19.718 "ns_manage": 1 00:18:19.718 }, 00:18:19.718 "multi_ctrlr": false, 00:18:19.718 "ana_reporting": false 00:18:19.718 }, 00:18:19.718 "vs": { 00:18:19.718 "nvme_version": "1.4" 00:18:19.718 }, 00:18:19.718 "ns_data": { 00:18:19.718 "id": 1, 00:18:19.718 "can_share": false 00:18:19.718 } 00:18:19.718 } 00:18:19.718 ], 00:18:19.718 "mp_policy": "active_passive" 00:18:19.718 } 00:18:19.718 } 00:18:19.718 ]' 00:18:19.718 18:31:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:19.979 18:31:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:19.979 18:31:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:19.979 18:31:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:19.979 18:31:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:19.979 18:31:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:19.979 18:31:39 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:19.979 18:31:39 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:19.979 18:31:39 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:19.979 18:31:39 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:19.979 18:31:39 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:20.241 18:31:39 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=fb29b290-68e1-4587-8076-d024f3c165a1 00:18:20.241 18:31:39 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:20.241 18:31:39 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fb29b290-68e1-4587-8076-d024f3c165a1 00:18:20.241 18:31:40 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:20.502 18:31:40 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=6399e6b4-1b3d-478e-81fb-6a78e12b0443 00:18:20.502 18:31:40 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6399e6b4-1b3d-478e-81fb-6a78e12b0443 00:18:20.763 18:31:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=99ea3cf0-8bad-4a69-adab-17504375e879 00:18:20.763 18:31:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 99ea3cf0-8bad-4a69-adab-17504375e879 00:18:20.763 18:31:40 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:20.763 18:31:40 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:20.763 18:31:40 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=99ea3cf0-8bad-4a69-adab-17504375e879 00:18:20.763 18:31:40 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:20.763 18:31:40 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 99ea3cf0-8bad-4a69-adab-17504375e879 00:18:20.763 18:31:40 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=99ea3cf0-8bad-4a69-adab-17504375e879 00:18:20.763 18:31:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:20.763 18:31:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:20.763 18:31:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:20.763 18:31:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 99ea3cf0-8bad-4a69-adab-17504375e879 00:18:21.024 18:31:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:21.024 { 00:18:21.024 "name": "99ea3cf0-8bad-4a69-adab-17504375e879", 00:18:21.024 "aliases": [ 00:18:21.024 "lvs/nvme0n1p0" 00:18:21.024 ], 00:18:21.024 "product_name": "Logical Volume", 00:18:21.024 "block_size": 4096, 00:18:21.024 "num_blocks": 26476544, 00:18:21.024 "uuid": "99ea3cf0-8bad-4a69-adab-17504375e879", 00:18:21.024 "assigned_rate_limits": { 00:18:21.024 "rw_ios_per_sec": 0, 00:18:21.024 "rw_mbytes_per_sec": 0, 00:18:21.024 "r_mbytes_per_sec": 0, 00:18:21.024 "w_mbytes_per_sec": 0 00:18:21.024 }, 00:18:21.024 "claimed": false, 00:18:21.024 "zoned": false, 00:18:21.024 "supported_io_types": { 00:18:21.024 "read": true, 00:18:21.024 "write": true, 00:18:21.024 "unmap": true, 00:18:21.024 "flush": false, 00:18:21.024 "reset": true, 00:18:21.024 "nvme_admin": false, 00:18:21.024 "nvme_io": false, 00:18:21.024 "nvme_io_md": false, 00:18:21.024 "write_zeroes": true, 00:18:21.024 "zcopy": false, 00:18:21.024 "get_zone_info": false, 00:18:21.024 "zone_management": false, 00:18:21.024 "zone_append": false, 00:18:21.024 "compare": false, 00:18:21.024 "compare_and_write": false, 00:18:21.024 "abort": false, 00:18:21.024 "seek_hole": true, 00:18:21.024 "seek_data": true, 00:18:21.024 "copy": false, 00:18:21.024 "nvme_iov_md": false 00:18:21.024 }, 00:18:21.024 "driver_specific": { 00:18:21.024 "lvol": { 00:18:21.024 "lvol_store_uuid": "6399e6b4-1b3d-478e-81fb-6a78e12b0443", 00:18:21.024 "base_bdev": "nvme0n1", 00:18:21.024 "thin_provision": true, 00:18:21.024 "num_allocated_clusters": 0, 00:18:21.024 "snapshot": false, 00:18:21.024 "clone": false, 00:18:21.024 "esnap_clone": false 00:18:21.024 } 00:18:21.024 } 00:18:21.024 } 00:18:21.024 ]' 00:18:21.024 18:31:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:21.024 18:31:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:21.024 18:31:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:21.024 18:31:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:21.024 18:31:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:21.024 18:31:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:21.024 18:31:40 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:21.024 18:31:40 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:21.024 18:31:40 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:21.286 18:31:41 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:21.286 18:31:41 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:21.286 18:31:41 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 99ea3cf0-8bad-4a69-adab-17504375e879 00:18:21.286 18:31:41 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=99ea3cf0-8bad-4a69-adab-17504375e879 00:18:21.286 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:21.286 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:21.286 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:21.286 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 99ea3cf0-8bad-4a69-adab-17504375e879 00:18:21.548 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:21.548 { 00:18:21.548 "name": "99ea3cf0-8bad-4a69-adab-17504375e879", 00:18:21.548 "aliases": [ 00:18:21.548 "lvs/nvme0n1p0" 00:18:21.548 ], 00:18:21.548 "product_name": "Logical Volume", 00:18:21.548 "block_size": 4096, 00:18:21.548 "num_blocks": 26476544, 00:18:21.548 "uuid": "99ea3cf0-8bad-4a69-adab-17504375e879", 00:18:21.548 "assigned_rate_limits": { 00:18:21.548 "rw_ios_per_sec": 0, 00:18:21.548 "rw_mbytes_per_sec": 0, 00:18:21.548 "r_mbytes_per_sec": 0, 00:18:21.548 "w_mbytes_per_sec": 0 00:18:21.548 }, 00:18:21.548 "claimed": false, 00:18:21.548 "zoned": false, 00:18:21.548 "supported_io_types": { 00:18:21.548 "read": true, 00:18:21.548 "write": true, 00:18:21.548 "unmap": true, 00:18:21.548 "flush": false, 00:18:21.548 "reset": true, 00:18:21.548 "nvme_admin": false, 00:18:21.548 "nvme_io": false, 00:18:21.548 "nvme_io_md": false, 00:18:21.548 "write_zeroes": true, 00:18:21.548 "zcopy": false, 00:18:21.548 "get_zone_info": false, 00:18:21.548 "zone_management": false, 00:18:21.548 "zone_append": false, 00:18:21.548 "compare": false, 00:18:21.548 "compare_and_write": false, 00:18:21.548 "abort": false, 00:18:21.548 "seek_hole": true, 00:18:21.548 "seek_data": true, 00:18:21.548 "copy": false, 00:18:21.548 "nvme_iov_md": false 00:18:21.548 }, 00:18:21.548 "driver_specific": { 00:18:21.548 "lvol": { 00:18:21.548 "lvol_store_uuid": "6399e6b4-1b3d-478e-81fb-6a78e12b0443", 00:18:21.548 "base_bdev": "nvme0n1", 00:18:21.548 "thin_provision": true, 00:18:21.548 "num_allocated_clusters": 0, 00:18:21.548 "snapshot": false, 00:18:21.548 "clone": false, 00:18:21.548 "esnap_clone": false 00:18:21.548 } 00:18:21.548 } 00:18:21.548 } 00:18:21.548 ]' 00:18:21.548 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:21.548 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:21.548 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:21.548 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:21.548 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:21.548 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:21.548 18:31:41 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:21.548 18:31:41 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:21.809 18:31:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:21.810 18:31:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 99ea3cf0-8bad-4a69-adab-17504375e879 00:18:21.810 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=99ea3cf0-8bad-4a69-adab-17504375e879 00:18:21.810 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:21.810 18:31:41 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:18:21.810 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:21.810 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 99ea3cf0-8bad-4a69-adab-17504375e879 00:18:22.072 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:22.072 { 00:18:22.072 "name": "99ea3cf0-8bad-4a69-adab-17504375e879", 00:18:22.072 "aliases": [ 00:18:22.072 "lvs/nvme0n1p0" 00:18:22.072 ], 00:18:22.072 "product_name": "Logical Volume", 00:18:22.072 "block_size": 4096, 00:18:22.072 "num_blocks": 26476544, 00:18:22.072 "uuid": "99ea3cf0-8bad-4a69-adab-17504375e879", 00:18:22.072 "assigned_rate_limits": { 00:18:22.072 "rw_ios_per_sec": 0, 00:18:22.072 "rw_mbytes_per_sec": 0, 00:18:22.072 "r_mbytes_per_sec": 0, 00:18:22.072 "w_mbytes_per_sec": 0 00:18:22.072 }, 00:18:22.072 "claimed": false, 00:18:22.072 "zoned": false, 00:18:22.072 "supported_io_types": { 00:18:22.072 "read": true, 00:18:22.072 "write": true, 00:18:22.072 "unmap": true, 00:18:22.072 "flush": false, 00:18:22.072 "reset": true, 00:18:22.072 "nvme_admin": false, 00:18:22.072 "nvme_io": false, 00:18:22.072 "nvme_io_md": false, 00:18:22.072 "write_zeroes": true, 00:18:22.072 "zcopy": false, 00:18:22.072 "get_zone_info": false, 00:18:22.072 "zone_management": false, 00:18:22.072 "zone_append": false, 00:18:22.072 "compare": false, 00:18:22.072 "compare_and_write": false, 00:18:22.072 "abort": false, 00:18:22.072 "seek_hole": true, 00:18:22.072 "seek_data": true, 00:18:22.072 "copy": false, 00:18:22.072 "nvme_iov_md": false 00:18:22.072 }, 00:18:22.072 "driver_specific": { 00:18:22.072 "lvol": { 00:18:22.072 "lvol_store_uuid": "6399e6b4-1b3d-478e-81fb-6a78e12b0443", 00:18:22.072 "base_bdev": "nvme0n1", 00:18:22.072 "thin_provision": true, 00:18:22.072 "num_allocated_clusters": 0, 00:18:22.072 "snapshot": false, 00:18:22.072 "clone": false, 00:18:22.072 "esnap_clone": false 00:18:22.072 } 00:18:22.072 } 00:18:22.072 } 00:18:22.072 ]' 00:18:22.072 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:22.072 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:22.072 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:22.072 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:22.072 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:22.072 18:31:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:22.072 18:31:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:22.072 18:31:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 99ea3cf0-8bad-4a69-adab-17504375e879 -c nvc0n1p0 --l2p_dram_limit 20 00:18:22.334 [2024-11-29 18:31:42.085084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.334 [2024-11-29 18:31:42.085121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:22.334 [2024-11-29 18:31:42.085133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:22.334 [2024-11-29 18:31:42.085139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.334 [2024-11-29 18:31:42.085181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.334 [2024-11-29 18:31:42.085188] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:22.334 [2024-11-29 18:31:42.085200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:22.334 [2024-11-29 18:31:42.085208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.334 [2024-11-29 18:31:42.085222] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:22.334 [2024-11-29 18:31:42.085428] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:22.334 [2024-11-29 18:31:42.085442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.334 [2024-11-29 18:31:42.085449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:22.334 [2024-11-29 18:31:42.085470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:18:22.334 [2024-11-29 18:31:42.085479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.334 [2024-11-29 18:31:42.085501] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2c2b79dc-07b8-4933-af56-f9e3b8fe6fde 00:18:22.334 [2024-11-29 18:31:42.086474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.334 [2024-11-29 18:31:42.086495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:22.334 [2024-11-29 18:31:42.086503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:22.334 [2024-11-29 18:31:42.086512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.334 [2024-11-29 18:31:42.091102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.334 [2024-11-29 18:31:42.091130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:22.334 [2024-11-29 18:31:42.091138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.536 ms 00:18:22.334 [2024-11-29 18:31:42.091147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.334 [2024-11-29 18:31:42.091201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.334 [2024-11-29 18:31:42.091209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:22.334 [2024-11-29 18:31:42.091219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:22.334 [2024-11-29 18:31:42.091228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.334 [2024-11-29 18:31:42.091259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.334 [2024-11-29 18:31:42.091268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:22.334 [2024-11-29 18:31:42.091274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:22.334 [2024-11-29 18:31:42.091281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.334 [2024-11-29 18:31:42.091295] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:22.335 [2024-11-29 18:31:42.092529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.335 [2024-11-29 18:31:42.092640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:22.335 [2024-11-29 18:31:42.092654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:18:22.335 [2024-11-29 18:31:42.092660] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.335 [2024-11-29 18:31:42.092685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.335 [2024-11-29 18:31:42.092691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:22.335 [2024-11-29 18:31:42.092700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:22.335 [2024-11-29 18:31:42.092706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.335 [2024-11-29 18:31:42.092718] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:22.335 [2024-11-29 18:31:42.092825] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:22.335 [2024-11-29 18:31:42.092836] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:22.335 [2024-11-29 18:31:42.092845] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:22.335 [2024-11-29 18:31:42.092854] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:22.335 [2024-11-29 18:31:42.092863] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:22.335 [2024-11-29 18:31:42.092874] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:22.335 [2024-11-29 18:31:42.092880] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:22.335 [2024-11-29 18:31:42.092886] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:22.335 [2024-11-29 18:31:42.092893] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:22.335 [2024-11-29 18:31:42.092900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.335 [2024-11-29 18:31:42.092907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:22.335 [2024-11-29 18:31:42.092914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:18:22.335 [2024-11-29 18:31:42.092921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.335 [2024-11-29 18:31:42.092987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.335 [2024-11-29 18:31:42.092993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:22.335 [2024-11-29 18:31:42.093000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:22.335 [2024-11-29 18:31:42.093005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.335 [2024-11-29 18:31:42.093076] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:22.335 [2024-11-29 18:31:42.093087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:22.335 [2024-11-29 18:31:42.093095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.335 [2024-11-29 18:31:42.093101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.335 [2024-11-29 18:31:42.093109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:22.335 [2024-11-29 18:31:42.093114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:22.335 [2024-11-29 18:31:42.093120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:22.335 
[2024-11-29 18:31:42.093125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:22.335 [2024-11-29 18:31:42.093132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:22.335 [2024-11-29 18:31:42.093137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.335 [2024-11-29 18:31:42.093143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:22.335 [2024-11-29 18:31:42.093148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:22.335 [2024-11-29 18:31:42.093156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.335 [2024-11-29 18:31:42.093163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:22.335 [2024-11-29 18:31:42.093170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:22.335 [2024-11-29 18:31:42.093175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.335 [2024-11-29 18:31:42.093181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:22.335 [2024-11-29 18:31:42.093187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:22.335 [2024-11-29 18:31:42.093194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.335 [2024-11-29 18:31:42.093199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:22.335 [2024-11-29 18:31:42.093205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:22.335 [2024-11-29 18:31:42.093210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.335 [2024-11-29 18:31:42.093216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:22.335 [2024-11-29 18:31:42.093221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:22.335 [2024-11-29 18:31:42.093228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.335 [2024-11-29 18:31:42.093234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:22.335 [2024-11-29 18:31:42.093241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:22.335 [2024-11-29 18:31:42.093247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.335 [2024-11-29 18:31:42.093256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:22.335 [2024-11-29 18:31:42.093262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:22.335 [2024-11-29 18:31:42.093269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.335 [2024-11-29 18:31:42.093275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:22.335 [2024-11-29 18:31:42.093282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:22.335 [2024-11-29 18:31:42.093288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.335 [2024-11-29 18:31:42.093295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:22.335 [2024-11-29 18:31:42.093301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:22.335 [2024-11-29 18:31:42.093307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.335 [2024-11-29 18:31:42.093313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:22.335 [2024-11-29 18:31:42.093320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:18:22.335 [2024-11-29 18:31:42.093326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.335 [2024-11-29 18:31:42.093333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:22.335 [2024-11-29 18:31:42.093339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:22.335 [2024-11-29 18:31:42.093345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.335 [2024-11-29 18:31:42.093350] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:22.335 [2024-11-29 18:31:42.093363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:22.335 [2024-11-29 18:31:42.093371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.335 [2024-11-29 18:31:42.093379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.335 [2024-11-29 18:31:42.093386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:22.335 [2024-11-29 18:31:42.093393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:22.335 [2024-11-29 18:31:42.093398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:22.335 [2024-11-29 18:31:42.093406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:22.335 [2024-11-29 18:31:42.093412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:22.335 [2024-11-29 18:31:42.093419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:22.335 [2024-11-29 18:31:42.093428] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:22.335 [2024-11-29 18:31:42.093438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.335 [2024-11-29 18:31:42.093445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:22.335 [2024-11-29 18:31:42.093467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:22.335 [2024-11-29 18:31:42.093474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:22.335 [2024-11-29 18:31:42.093482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:22.335 [2024-11-29 18:31:42.093488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:22.335 [2024-11-29 18:31:42.093497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:22.335 [2024-11-29 18:31:42.093503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:22.335 [2024-11-29 18:31:42.093511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:22.335 [2024-11-29 18:31:42.093517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:22.335 [2024-11-29 18:31:42.093525] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:22.335 [2024-11-29 18:31:42.093531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:22.335 [2024-11-29 18:31:42.093539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:22.335 [2024-11-29 18:31:42.093545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:22.335 [2024-11-29 18:31:42.093552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:22.335 [2024-11-29 18:31:42.093558] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:22.335 [2024-11-29 18:31:42.093568] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.335 [2024-11-29 18:31:42.093575] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:22.335 [2024-11-29 18:31:42.093583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:22.336 [2024-11-29 18:31:42.093589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:22.336 [2024-11-29 18:31:42.093599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:22.336 [2024-11-29 18:31:42.093605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.336 [2024-11-29 18:31:42.093614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:22.336 [2024-11-29 18:31:42.093620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:18:22.336 [2024-11-29 18:31:42.093627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.336 [2024-11-29 18:31:42.093649] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:18:22.336 [2024-11-29 18:31:42.093657] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:26.662 [2024-11-29 18:31:46.079942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.662 [2024-11-29 18:31:46.080026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:26.662 [2024-11-29 18:31:46.080046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3986.276 ms 00:18:26.662 [2024-11-29 18:31:46.080058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.662 [2024-11-29 18:31:46.093599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.662 [2024-11-29 18:31:46.093660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:26.662 [2024-11-29 18:31:46.093675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.429 ms 00:18:26.662 [2024-11-29 18:31:46.093688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.662 [2024-11-29 18:31:46.093791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.662 [2024-11-29 18:31:46.093803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:26.662 [2024-11-29 18:31:46.093815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:18:26.662 [2024-11-29 18:31:46.093825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.662 [2024-11-29 18:31:46.118488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.662 [2024-11-29 18:31:46.118568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:26.662 [2024-11-29 18:31:46.118591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.609 ms 00:18:26.662 [2024-11-29 18:31:46.118608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.662 [2024-11-29 18:31:46.118673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.662 [2024-11-29 18:31:46.118697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:26.662 [2024-11-29 18:31:46.118713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:26.663 [2024-11-29 18:31:46.118730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.663 [2024-11-29 18:31:46.119356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.663 [2024-11-29 18:31:46.119408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:26.663 [2024-11-29 18:31:46.119427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:18:26.663 [2024-11-29 18:31:46.119449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.663 [2024-11-29 18:31:46.119668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.663 [2024-11-29 18:31:46.119686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:26.663 [2024-11-29 18:31:46.119705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:18:26.663 [2024-11-29 18:31:46.119721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.663 [2024-11-29 18:31:46.127901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.663 [2024-11-29 18:31:46.127953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:26.663 [2024-11-29 
18:31:46.127963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.153 ms 00:18:26.663 [2024-11-29 18:31:46.127973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.663 [2024-11-29 18:31:46.138000] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:26.663 [2024-11-29 18:31:46.145898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.663 [2024-11-29 18:31:46.145941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:26.663 [2024-11-29 18:31:46.145955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.856 ms 00:18:26.663 [2024-11-29 18:31:46.145963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.663 [2024-11-29 18:31:46.235652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.663 [2024-11-29 18:31:46.235708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:26.663 [2024-11-29 18:31:46.235726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.652 ms 00:18:26.663 [2024-11-29 18:31:46.235737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.663 [2024-11-29 18:31:46.235936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.663 [2024-11-29 18:31:46.235948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:26.663 [2024-11-29 18:31:46.235959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:18:26.663 [2024-11-29 18:31:46.235967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.663 [2024-11-29 18:31:46.241972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.663 [2024-11-29 18:31:46.242022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:26.663 [2024-11-29 18:31:46.242037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.965 ms 00:18:26.663 [2024-11-29 18:31:46.242045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.663 [2024-11-29 18:31:46.247033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.663 [2024-11-29 18:31:46.247083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:26.663 [2024-11-29 18:31:46.247097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.915 ms 00:18:26.663 [2024-11-29 18:31:46.247105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.663 [2024-11-29 18:31:46.247426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.663 [2024-11-29 18:31:46.247437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:26.663 [2024-11-29 18:31:46.247474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:18:26.663 [2024-11-29 18:31:46.247483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.663 [2024-11-29 18:31:46.296567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.663 [2024-11-29 18:31:46.296620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:26.663 [2024-11-29 18:31:46.296635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.043 ms 00:18:26.663 [2024-11-29 18:31:46.296644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.663 [2024-11-29 18:31:46.303838] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.663 [2024-11-29 18:31:46.303888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:26.663 [2024-11-29 18:31:46.303902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.130 ms 00:18:26.663 [2024-11-29 18:31:46.303910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.663 [2024-11-29 18:31:46.309744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.663 [2024-11-29 18:31:46.309792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:26.663 [2024-11-29 18:31:46.309805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.784 ms 00:18:26.663 [2024-11-29 18:31:46.309813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.663 [2024-11-29 18:31:46.315926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.663 [2024-11-29 18:31:46.315973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:26.663 [2024-11-29 18:31:46.315989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.065 ms 00:18:26.663 [2024-11-29 18:31:46.315997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.663 [2024-11-29 18:31:46.316047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.663 [2024-11-29 18:31:46.316060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:26.663 [2024-11-29 18:31:46.316072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:26.663 [2024-11-29 18:31:46.316080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.663 [2024-11-29 18:31:46.316152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.663 [2024-11-29 18:31:46.316161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:26.663 [2024-11-29 18:31:46.316178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:26.663 [2024-11-29 18:31:46.316186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.663 [2024-11-29 18:31:46.317282] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4231.721 ms, result 0 00:18:26.663 { 00:18:26.663 "name": "ftl0", 00:18:26.663 "uuid": "2c2b79dc-07b8-4933-af56-f9e3b8fe6fde" 00:18:26.663 } 00:18:26.663 18:31:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:26.663 18:31:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:26.663 18:31:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:18:26.925 18:31:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:26.925 [2024-11-29 18:31:46.651642] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:26.925 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:26.925 Zero copy mechanism will not be used. 00:18:26.925 Running I/O for 4 seconds... 
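[editor's note] This first bdevperf pass drives the FTL bdev at queue depth 1 with 69632-byte (68 KiB) random writes; as the log notes just above, that exceeds the 65536-byte zero-copy threshold, so the zero-copy path is disabled for this run. The MiB/s column in the summary that follows is derived directly from IOPS and I/O size. A minimal sketch of that derivation (Python; the function name is illustrative, field names are taken from the results JSON printed below):

    import json

    def mibps(results_json: str) -> float:
        # MiB/s = IOPS * io_size_bytes / 2**20. For the run below,
        # 789.857 IOPS * 69632 B / 2**20 ~= 52.45 MiB/s, matching
        # the reported "mibps" value.
        job = json.loads(results_json)["results"][0]
        return job["iops"] * job["io_size"] / (1024 * 1024)
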
00:18:28.815 792.00 IOPS, 52.59 MiB/s [2024-11-29T18:31:49.664Z] 730.00 IOPS, 48.48 MiB/s [2024-11-29T18:31:51.055Z] 830.00 IOPS, 55.12 MiB/s [2024-11-29T18:31:51.055Z] 790.00 IOPS, 52.46 MiB/s 00:18:31.150 Latency(us) 00:18:31.150 [2024-11-29T18:31:51.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.150 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:18:31.150 ftl0 : 4.00 789.86 52.45 0.00 0.00 1330.88 324.53 3654.89 00:18:31.150 [2024-11-29T18:31:51.055Z] =================================================================================================================== 00:18:31.150 [2024-11-29T18:31:51.055Z] Total : 789.86 52.45 0.00 0.00 1330.88 324.53 3654.89 00:18:31.150 [2024-11-29 18:31:50.660683] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:31.150 { 00:18:31.150 "results": [ 00:18:31.150 { 00:18:31.150 "job": "ftl0", 00:18:31.150 "core_mask": "0x1", 00:18:31.150 "workload": "randwrite", 00:18:31.150 "status": "finished", 00:18:31.150 "queue_depth": 1, 00:18:31.150 "io_size": 69632, 00:18:31.150 "runtime": 4.001988, 00:18:31.150 "iops": 789.8574408518966, 00:18:31.150 "mibps": 52.45147068157126, 00:18:31.150 "io_failed": 0, 00:18:31.150 "io_timeout": 0, 00:18:31.150 "avg_latency_us": 1330.8822816538095, 00:18:31.150 "min_latency_us": 324.52923076923076, 00:18:31.150 "max_latency_us": 3654.892307692308 00:18:31.150 } 00:18:31.150 ], 00:18:31.150 "core_count": 1 00:18:31.150 } 00:18:31.150 18:31:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:18:31.150 [2024-11-29 18:31:50.783930] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:31.150 Running I/O for 4 seconds... 
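[editor's note] The second pass switches to queue depth 128 with 4 KiB random writes. With the queue held close to full, Little's law (L = λ·W) predicts mean latency ≈ depth / IOPS, which lines up with the results block below. A rough sketch of that sanity check (Python; illustrative helper, not part of the harness):

    def expected_avg_latency_us(queue_depth: int, iops: float) -> float:
        # Little's law: in-flight I/Os = IOPS * latency, so with a
        # saturated queue, latency ~= depth / IOPS. For the run below,
        # 128 / 5450.84 ~= 23483 us, within ~0.5% of the reported
        # avg_latency_us of 23383.93 (ramp-up likely accounts for
        # the small gap).
        return queue_depth / iops * 1_000_000
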
00:18:33.042 7054.00 IOPS, 27.55 MiB/s [2024-11-29T18:31:53.892Z] 6045.50 IOPS, 23.62 MiB/s [2024-11-29T18:31:55.282Z] 5612.67 IOPS, 21.92 MiB/s [2024-11-29T18:31:55.282Z] 5466.00 IOPS, 21.35 MiB/s 00:18:35.377 Latency(us) 00:18:35.377 [2024-11-29T18:31:55.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.377 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:18:35.377 ftl0 : 4.03 5450.84 21.29 0.00 0.00 23383.93 340.28 49807.36 00:18:35.377 [2024-11-29T18:31:55.282Z] =================================================================================================================== 00:18:35.377 [2024-11-29T18:31:55.282Z] Total : 5450.84 21.29 0.00 0.00 23383.93 0.00 49807.36 00:18:35.377 { 00:18:35.377 "results": [ 00:18:35.377 { 00:18:35.377 "job": "ftl0", 00:18:35.377 "core_mask": "0x1", 00:18:35.377 "workload": "randwrite", 00:18:35.377 "status": "finished", 00:18:35.377 "queue_depth": 128, 00:18:35.377 "io_size": 4096, 00:18:35.377 "runtime": 4.034611, 00:18:35.377 "iops": 5450.835284987822, 00:18:35.377 "mibps": 21.29232533198368, 00:18:35.377 "io_failed": 0, 00:18:35.377 "io_timeout": 0, 00:18:35.377 "avg_latency_us": 23383.927801438287, 00:18:35.377 "min_latency_us": 340.2830769230769, 00:18:35.377 "max_latency_us": 49807.36 00:18:35.377 } 00:18:35.377 ], 00:18:35.377 "core_count": 1 00:18:35.377 } 00:18:35.377 [2024-11-29 18:31:54.825317] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:35.377 18:31:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:18:35.377 [2024-11-29 18:31:54.941606] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:35.377 Running I/O for 4 seconds... 
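[editor's note] The final pass runs a verify workload, in which bdevperf reads back and checks the data it wrote over the configured LBA range; the pass condition of interest is io_failed and io_timeout staying at 0 in the results block. A hedged sketch of checking such a block programmatically (Python; hypothetical helper, field names as in the JSON printed below):

    import json

    def assert_verify_clean(results_json: str) -> None:
        for job in json.loads(results_json)["results"]:
            # A verify run is only trustworthy if it finished and
            # saw zero failed or timed-out I/Os.
            assert job["status"] == "finished"
            assert job["io_failed"] == 0
            assert job["io_timeout"] == 0
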
00:18:37.263 4348.00 IOPS, 16.98 MiB/s [2024-11-29T18:31:58.111Z] 5017.50 IOPS, 19.60 MiB/s [2024-11-29T18:31:59.055Z] 5097.33 IOPS, 19.91 MiB/s [2024-11-29T18:31:59.055Z] 5010.00 IOPS, 19.57 MiB/s 00:18:39.150 Latency(us) 00:18:39.150 [2024-11-29T18:31:59.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.150 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:39.150 Verification LBA range: start 0x0 length 0x1400000 00:18:39.150 ftl0 : 4.02 5022.09 19.62 0.00 0.00 25414.55 299.32 45774.38 00:18:39.150 [2024-11-29T18:31:59.055Z] =================================================================================================================== 00:18:39.150 [2024-11-29T18:31:59.055Z] Total : 5022.09 19.62 0.00 0.00 25414.55 0.00 45774.38 00:18:39.150 { 00:18:39.150 "results": [ 00:18:39.150 { 00:18:39.150 "job": "ftl0", 00:18:39.150 "core_mask": "0x1", 00:18:39.150 "workload": "verify", 00:18:39.150 "status": "finished", 00:18:39.150 "verify_range": { 00:18:39.150 "start": 0, 00:18:39.150 "length": 20971520 00:18:39.150 }, 00:18:39.150 "queue_depth": 128, 00:18:39.150 "io_size": 4096, 00:18:39.150 "runtime": 4.015854, 00:18:39.150 "iops": 5022.094926757795, 00:18:39.150 "mibps": 19.617558307647638, 00:18:39.150 "io_failed": 0, 00:18:39.150 "io_timeout": 0, 00:18:39.150 "avg_latency_us": 25414.548492966773, 00:18:39.150 "min_latency_us": 299.32307692307694, 00:18:39.150 "max_latency_us": 45774.375384615385 00:18:39.150 } 00:18:39.150 ], 00:18:39.150 "core_count": 1 00:18:39.150 } 00:18:39.150 [2024-11-29 18:31:58.966650] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:39.150 18:31:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:18:39.411 [2024-11-29 18:31:59.182953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.411 [2024-11-29 18:31:59.183012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:39.411 [2024-11-29 18:31:59.183028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:39.411 [2024-11-29 18:31:59.183045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.411 [2024-11-29 18:31:59.183076] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:39.411 [2024-11-29 18:31:59.183793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.411 [2024-11-29 18:31:59.183835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:39.411 [2024-11-29 18:31:59.183847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:18:39.411 [2024-11-29 18:31:59.183859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.411 [2024-11-29 18:31:59.187023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.411 [2024-11-29 18:31:59.187075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:39.411 [2024-11-29 18:31:59.187087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.139 ms 00:18:39.411 [2024-11-29 18:31:59.187102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.674 [2024-11-29 18:31:59.400766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.674 [2024-11-29 18:31:59.400989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:18:39.674 [2024-11-29 18:31:59.401015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 213.645 ms 00:18:39.674 [2024-11-29 18:31:59.401027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.674 [2024-11-29 18:31:59.407272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.674 [2024-11-29 18:31:59.407318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:39.674 [2024-11-29 18:31:59.407329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.203 ms 00:18:39.674 [2024-11-29 18:31:59.407340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.674 [2024-11-29 18:31:59.410295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.674 [2024-11-29 18:31:59.410354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:39.674 [2024-11-29 18:31:59.410365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.886 ms 00:18:39.674 [2024-11-29 18:31:59.410374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.674 [2024-11-29 18:31:59.416597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.674 [2024-11-29 18:31:59.416656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:39.674 [2024-11-29 18:31:59.416668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.181 ms 00:18:39.674 [2024-11-29 18:31:59.416683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.674 [2024-11-29 18:31:59.416804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.674 [2024-11-29 18:31:59.416825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:39.674 [2024-11-29 18:31:59.416833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:18:39.675 [2024-11-29 18:31:59.416844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.675 [2024-11-29 18:31:59.419886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.675 [2024-11-29 18:31:59.419938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:39.675 [2024-11-29 18:31:59.419949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.025 ms 00:18:39.675 [2024-11-29 18:31:59.419960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.675 [2024-11-29 18:31:59.422945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.675 [2024-11-29 18:31:59.422998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:39.675 [2024-11-29 18:31:59.423008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.945 ms 00:18:39.675 [2024-11-29 18:31:59.423018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.675 [2024-11-29 18:31:59.425001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.675 [2024-11-29 18:31:59.425051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:39.675 [2024-11-29 18:31:59.425062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.945 ms 00:18:39.675 [2024-11-29 18:31:59.425074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.675 [2024-11-29 18:31:59.427144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.675 [2024-11-29 18:31:59.427207] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:39.675 [2024-11-29 18:31:59.427217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.005 ms 00:18:39.675 [2024-11-29 18:31:59.427227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.675 [2024-11-29 18:31:59.427266] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:39.675 [2024-11-29 18:31:59.427288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:18:39.675 [2024-11-29 18:31:59.427510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:39.675 [2024-11-29 18:31:59.427995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428203] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:39.676 [2024-11-29 18:31:59.428247] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:39.676 [2024-11-29 18:31:59.428261] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2c2b79dc-07b8-4933-af56-f9e3b8fe6fde 00:18:39.676 [2024-11-29 18:31:59.428271] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:39.676 [2024-11-29 18:31:59.428279] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:39.676 [2024-11-29 18:31:59.428288] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:39.676 [2024-11-29 18:31:59.428296] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:39.676 [2024-11-29 18:31:59.428307] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:39.676 [2024-11-29 18:31:59.428315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:39.676 [2024-11-29 18:31:59.428325] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:39.676 [2024-11-29 18:31:59.428331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:39.676 [2024-11-29 18:31:59.428339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:39.676 [2024-11-29 18:31:59.428347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.676 [2024-11-29 18:31:59.428359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:39.676 [2024-11-29 18:31:59.428370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms 00:18:39.676 [2024-11-29 18:31:59.428379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.676 [2024-11-29 18:31:59.430563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.676 [2024-11-29 18:31:59.430596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:39.676 [2024-11-29 18:31:59.430606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.166 ms 00:18:39.676 [2024-11-29 18:31:59.430616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.676 [2024-11-29 18:31:59.430756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.676 [2024-11-29 18:31:59.430772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:39.676 [2024-11-29 18:31:59.430781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:18:39.676 [2024-11-29 18:31:59.430794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.676 [2024-11-29 18:31:59.438332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.676 [2024-11-29 18:31:59.438533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:39.676 [2024-11-29 18:31:59.438554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.676 [2024-11-29 18:31:59.438564] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:39.676 [2024-11-29 18:31:59.438627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.676 [2024-11-29 18:31:59.438641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:39.676 [2024-11-29 18:31:59.438649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.676 [2024-11-29 18:31:59.438660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.676 [2024-11-29 18:31:59.438737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.676 [2024-11-29 18:31:59.438749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:39.676 [2024-11-29 18:31:59.438757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.676 [2024-11-29 18:31:59.438767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.676 [2024-11-29 18:31:59.438786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.676 [2024-11-29 18:31:59.438795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:39.676 [2024-11-29 18:31:59.438805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.676 [2024-11-29 18:31:59.438820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.676 [2024-11-29 18:31:59.452536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.676 [2024-11-29 18:31:59.452592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:39.676 [2024-11-29 18:31:59.452604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.676 [2024-11-29 18:31:59.452614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.676 [2024-11-29 18:31:59.464111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.676 [2024-11-29 18:31:59.464175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:39.676 [2024-11-29 18:31:59.464192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.676 [2024-11-29 18:31:59.464202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.676 [2024-11-29 18:31:59.464275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.676 [2024-11-29 18:31:59.464287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:39.676 [2024-11-29 18:31:59.464297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.676 [2024-11-29 18:31:59.464307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.676 [2024-11-29 18:31:59.464355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.676 [2024-11-29 18:31:59.464367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:39.676 [2024-11-29 18:31:59.464376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.676 [2024-11-29 18:31:59.464392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.676 [2024-11-29 18:31:59.464499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.676 [2024-11-29 18:31:59.464513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:39.676 [2024-11-29 18:31:59.464522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:18:39.676 [2024-11-29 18:31:59.464532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.676 [2024-11-29 18:31:59.464566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.676 [2024-11-29 18:31:59.464579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:39.676 [2024-11-29 18:31:59.464588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.676 [2024-11-29 18:31:59.464598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.676 [2024-11-29 18:31:59.464638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.676 [2024-11-29 18:31:59.464649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:39.676 [2024-11-29 18:31:59.464658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.677 [2024-11-29 18:31:59.464668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.677 [2024-11-29 18:31:59.464713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.677 [2024-11-29 18:31:59.464726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:39.677 [2024-11-29 18:31:59.464744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.677 [2024-11-29 18:31:59.464760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.677 [2024-11-29 18:31:59.464902] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 281.903 ms, result 0 00:18:39.677 true 00:18:39.677 18:31:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 87282 00:18:39.677 18:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 87282 ']' 00:18:39.677 18:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 87282 00:18:39.677 18:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:18:39.677 18:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.677 18:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87282 00:18:39.677 killing process with pid 87282 00:18:39.677 Received shutdown signal, test time was about 4.000000 seconds 00:18:39.677 00:18:39.677 Latency(us) 00:18:39.677 [2024-11-29T18:31:59.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.677 [2024-11-29T18:31:59.582Z] =================================================================================================================== 00:18:39.677 [2024-11-29T18:31:59.582Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:39.677 18:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.677 18:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.677 18:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87282' 00:18:39.677 18:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 87282 00:18:39.677 18:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 87282 00:18:39.938 Remove shared memory files 00:18:39.938 18:31:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:39.938 18:31:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:18:39.938 18:31:59 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:39.938 18:31:59 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:18:39.938 18:31:59 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:39.938 18:31:59 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:39.938 18:31:59 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:39.938 18:31:59 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:39.938 ************************************ 00:18:39.938 END TEST ftl_bdevperf 00:18:39.938 ************************************ 00:18:39.938 00:18:39.938 real 0m21.891s 00:18:39.938 user 0m24.530s 00:18:39.938 sys 0m1.055s 00:18:39.938 18:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:39.938 18:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:40.200 18:31:59 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:40.200 18:31:59 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:40.200 18:31:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.200 18:31:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:40.200 ************************************ 00:18:40.200 START TEST ftl_trim 00:18:40.200 ************************************ 00:18:40.200 18:31:59 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:40.200 * Looking for test storage... 00:18:40.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:40.200 18:31:59 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:40.200 18:31:59 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:18:40.200 18:31:59 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:40.200 18:32:00 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.200 18:32:00 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:18:40.200 18:32:00 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.200 18:32:00 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:40.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.200 --rc genhtml_branch_coverage=1 00:18:40.200 --rc genhtml_function_coverage=1 00:18:40.200 --rc genhtml_legend=1 00:18:40.200 --rc geninfo_all_blocks=1 00:18:40.200 --rc geninfo_unexecuted_blocks=1 00:18:40.200 00:18:40.200 ' 00:18:40.200 18:32:00 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:40.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.200 --rc genhtml_branch_coverage=1 00:18:40.200 --rc genhtml_function_coverage=1 00:18:40.200 --rc genhtml_legend=1 00:18:40.200 --rc geninfo_all_blocks=1 00:18:40.200 --rc geninfo_unexecuted_blocks=1 00:18:40.200 00:18:40.200 ' 00:18:40.200 18:32:00 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:40.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.200 --rc genhtml_branch_coverage=1 00:18:40.200 --rc genhtml_function_coverage=1 00:18:40.200 --rc genhtml_legend=1 00:18:40.200 --rc geninfo_all_blocks=1 00:18:40.200 --rc geninfo_unexecuted_blocks=1 00:18:40.200 00:18:40.200 ' 00:18:40.200 18:32:00 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:40.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.200 --rc genhtml_branch_coverage=1 00:18:40.200 --rc genhtml_function_coverage=1 00:18:40.200 --rc genhtml_legend=1 00:18:40.200 --rc geninfo_all_blocks=1 00:18:40.200 --rc geninfo_unexecuted_blocks=1 00:18:40.200 00:18:40.200 ' 00:18:40.200 18:32:00 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:40.200 18:32:00 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:40.200 18:32:00 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:40.200 18:32:00 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:40.200 18:32:00 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:40.200 18:32:00 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:40.200 18:32:00 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:40.200 18:32:00 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:40.200 18:32:00 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:40.200 18:32:00 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:40.200 18:32:00 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:40.200 18:32:00 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:40.200 18:32:00 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:40.200 18:32:00 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:40.200 18:32:00 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:40.200 18:32:00 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:40.201 18:32:00 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=87630 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 87630 00:18:40.201 18:32:00 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87630 ']' 00:18:40.201 18:32:00 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.201 18:32:00 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:40.201 18:32:00 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.201 18:32:00 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.201 18:32:00 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.201 18:32:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:40.462 [2024-11-29 18:32:00.163032] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:18:40.462 [2024-11-29 18:32:00.163430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87630 ] 00:18:40.462 [2024-11-29 18:32:00.325920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:40.462 [2024-11-29 18:32:00.357360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.462 [2024-11-29 18:32:00.357603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.462 [2024-11-29 18:32:00.357762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.407 18:32:01 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.407 18:32:01 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:18:41.407 18:32:01 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:41.407 18:32:01 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:41.407 18:32:01 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:41.407 18:32:01 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:41.407 18:32:01 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:41.407 18:32:01 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:41.668 18:32:01 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:41.669 18:32:01 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:41.669 18:32:01 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:41.669 18:32:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:41.669 18:32:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:41.669 18:32:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:41.669 18:32:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:41.669 18:32:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:41.669 18:32:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:41.669 { 00:18:41.669 "name": "nvme0n1", 00:18:41.669 "aliases": [ 
00:18:41.669 "206dade0-5454-4301-bec7-d5f7aa8802bf" 00:18:41.669 ], 00:18:41.669 "product_name": "NVMe disk", 00:18:41.669 "block_size": 4096, 00:18:41.669 "num_blocks": 1310720, 00:18:41.669 "uuid": "206dade0-5454-4301-bec7-d5f7aa8802bf", 00:18:41.669 "numa_id": -1, 00:18:41.669 "assigned_rate_limits": { 00:18:41.669 "rw_ios_per_sec": 0, 00:18:41.669 "rw_mbytes_per_sec": 0, 00:18:41.669 "r_mbytes_per_sec": 0, 00:18:41.669 "w_mbytes_per_sec": 0 00:18:41.669 }, 00:18:41.669 "claimed": true, 00:18:41.669 "claim_type": "read_many_write_one", 00:18:41.669 "zoned": false, 00:18:41.669 "supported_io_types": { 00:18:41.669 "read": true, 00:18:41.669 "write": true, 00:18:41.669 "unmap": true, 00:18:41.669 "flush": true, 00:18:41.669 "reset": true, 00:18:41.669 "nvme_admin": true, 00:18:41.669 "nvme_io": true, 00:18:41.669 "nvme_io_md": false, 00:18:41.669 "write_zeroes": true, 00:18:41.669 "zcopy": false, 00:18:41.669 "get_zone_info": false, 00:18:41.669 "zone_management": false, 00:18:41.669 "zone_append": false, 00:18:41.669 "compare": true, 00:18:41.669 "compare_and_write": false, 00:18:41.669 "abort": true, 00:18:41.669 "seek_hole": false, 00:18:41.669 "seek_data": false, 00:18:41.669 "copy": true, 00:18:41.669 "nvme_iov_md": false 00:18:41.669 }, 00:18:41.669 "driver_specific": { 00:18:41.669 "nvme": [ 00:18:41.669 { 00:18:41.669 "pci_address": "0000:00:11.0", 00:18:41.669 "trid": { 00:18:41.669 "trtype": "PCIe", 00:18:41.669 "traddr": "0000:00:11.0" 00:18:41.669 }, 00:18:41.669 "ctrlr_data": { 00:18:41.669 "cntlid": 0, 00:18:41.669 "vendor_id": "0x1b36", 00:18:41.669 "model_number": "QEMU NVMe Ctrl", 00:18:41.669 "serial_number": "12341", 00:18:41.669 "firmware_revision": "8.0.0", 00:18:41.669 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:41.669 "oacs": { 00:18:41.669 "security": 0, 00:18:41.669 "format": 1, 00:18:41.669 "firmware": 0, 00:18:41.669 "ns_manage": 1 00:18:41.669 }, 00:18:41.669 "multi_ctrlr": false, 00:18:41.669 "ana_reporting": false 00:18:41.669 }, 00:18:41.669 "vs": { 00:18:41.669 "nvme_version": "1.4" 00:18:41.669 }, 00:18:41.669 "ns_data": { 00:18:41.669 "id": 1, 00:18:41.669 "can_share": false 00:18:41.669 } 00:18:41.669 } 00:18:41.669 ], 00:18:41.669 "mp_policy": "active_passive" 00:18:41.669 } 00:18:41.669 } 00:18:41.669 ]' 00:18:41.669 18:32:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:41.929 18:32:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:41.929 18:32:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:41.929 18:32:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:41.929 18:32:01 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:41.929 18:32:01 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:18:41.929 18:32:01 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:41.929 18:32:01 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:41.929 18:32:01 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:41.929 18:32:01 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:41.929 18:32:01 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:41.929 18:32:01 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=6399e6b4-1b3d-478e-81fb-6a78e12b0443 00:18:41.929 18:32:01 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:41.929 18:32:01 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 6399e6b4-1b3d-478e-81fb-6a78e12b0443 00:18:42.190 18:32:02 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:42.451 18:32:02 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=2300869b-4d61-4517-b49c-5f6c70ada65d 00:18:42.451 18:32:02 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2300869b-4d61-4517-b49c-5f6c70ada65d 00:18:42.713 18:32:02 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=f092cb59-c431-42d7-b3e7-aa53de175b62 00:18:42.713 18:32:02 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f092cb59-c431-42d7-b3e7-aa53de175b62 00:18:42.713 18:32:02 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:42.713 18:32:02 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:42.713 18:32:02 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=f092cb59-c431-42d7-b3e7-aa53de175b62 00:18:42.713 18:32:02 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:42.713 18:32:02 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size f092cb59-c431-42d7-b3e7-aa53de175b62 00:18:42.713 18:32:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f092cb59-c431-42d7-b3e7-aa53de175b62 00:18:42.713 18:32:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:42.713 18:32:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:42.713 18:32:02 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:42.713 18:32:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f092cb59-c431-42d7-b3e7-aa53de175b62 00:18:42.974 18:32:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:42.974 { 00:18:42.974 "name": "f092cb59-c431-42d7-b3e7-aa53de175b62", 00:18:42.974 "aliases": [ 00:18:42.974 "lvs/nvme0n1p0" 00:18:42.974 ], 00:18:42.974 "product_name": "Logical Volume", 00:18:42.974 "block_size": 4096, 00:18:42.974 "num_blocks": 26476544, 00:18:42.974 "uuid": "f092cb59-c431-42d7-b3e7-aa53de175b62", 00:18:42.974 "assigned_rate_limits": { 00:18:42.974 "rw_ios_per_sec": 0, 00:18:42.974 "rw_mbytes_per_sec": 0, 00:18:42.974 "r_mbytes_per_sec": 0, 00:18:42.974 "w_mbytes_per_sec": 0 00:18:42.974 }, 00:18:42.974 "claimed": false, 00:18:42.974 "zoned": false, 00:18:42.974 "supported_io_types": { 00:18:42.974 "read": true, 00:18:42.974 "write": true, 00:18:42.974 "unmap": true, 00:18:42.974 "flush": false, 00:18:42.974 "reset": true, 00:18:42.974 "nvme_admin": false, 00:18:42.974 "nvme_io": false, 00:18:42.974 "nvme_io_md": false, 00:18:42.974 "write_zeroes": true, 00:18:42.974 "zcopy": false, 00:18:42.974 "get_zone_info": false, 00:18:42.974 "zone_management": false, 00:18:42.974 "zone_append": false, 00:18:42.974 "compare": false, 00:18:42.974 "compare_and_write": false, 00:18:42.974 "abort": false, 00:18:42.974 "seek_hole": true, 00:18:42.974 "seek_data": true, 00:18:42.974 "copy": false, 00:18:42.974 "nvme_iov_md": false 00:18:42.974 }, 00:18:42.974 "driver_specific": { 00:18:42.974 "lvol": { 00:18:42.974 "lvol_store_uuid": "2300869b-4d61-4517-b49c-5f6c70ada65d", 00:18:42.974 "base_bdev": "nvme0n1", 00:18:42.974 "thin_provision": true, 00:18:42.974 "num_allocated_clusters": 0, 00:18:42.975 "snapshot": false, 00:18:42.975 "clone": false, 00:18:42.975 "esnap_clone": false 00:18:42.975 } 00:18:42.975 } 00:18:42.975 } 00:18:42.975 ]' 00:18:42.975 18:32:02 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:42.975 18:32:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:42.975 18:32:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:42.975 18:32:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:42.975 18:32:02 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:42.975 18:32:02 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:42.975 18:32:02 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:42.975 18:32:02 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:42.975 18:32:02 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:43.236 18:32:03 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:43.237 18:32:03 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:43.237 18:32:03 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size f092cb59-c431-42d7-b3e7-aa53de175b62 00:18:43.237 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f092cb59-c431-42d7-b3e7-aa53de175b62 00:18:43.237 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:43.237 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:43.237 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:43.237 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f092cb59-c431-42d7-b3e7-aa53de175b62 00:18:43.498 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:43.498 { 00:18:43.498 "name": "f092cb59-c431-42d7-b3e7-aa53de175b62", 00:18:43.498 "aliases": [ 00:18:43.498 "lvs/nvme0n1p0" 00:18:43.498 ], 00:18:43.498 "product_name": "Logical Volume", 00:18:43.498 "block_size": 4096, 00:18:43.498 "num_blocks": 26476544, 00:18:43.498 "uuid": "f092cb59-c431-42d7-b3e7-aa53de175b62", 00:18:43.498 "assigned_rate_limits": { 00:18:43.498 "rw_ios_per_sec": 0, 00:18:43.498 "rw_mbytes_per_sec": 0, 00:18:43.498 "r_mbytes_per_sec": 0, 00:18:43.498 "w_mbytes_per_sec": 0 00:18:43.498 }, 00:18:43.498 "claimed": false, 00:18:43.498 "zoned": false, 00:18:43.498 "supported_io_types": { 00:18:43.498 "read": true, 00:18:43.498 "write": true, 00:18:43.498 "unmap": true, 00:18:43.498 "flush": false, 00:18:43.498 "reset": true, 00:18:43.498 "nvme_admin": false, 00:18:43.498 "nvme_io": false, 00:18:43.498 "nvme_io_md": false, 00:18:43.498 "write_zeroes": true, 00:18:43.498 "zcopy": false, 00:18:43.498 "get_zone_info": false, 00:18:43.498 "zone_management": false, 00:18:43.498 "zone_append": false, 00:18:43.498 "compare": false, 00:18:43.498 "compare_and_write": false, 00:18:43.498 "abort": false, 00:18:43.498 "seek_hole": true, 00:18:43.498 "seek_data": true, 00:18:43.498 "copy": false, 00:18:43.498 "nvme_iov_md": false 00:18:43.498 }, 00:18:43.498 "driver_specific": { 00:18:43.498 "lvol": { 00:18:43.498 "lvol_store_uuid": "2300869b-4d61-4517-b49c-5f6c70ada65d", 00:18:43.498 "base_bdev": "nvme0n1", 00:18:43.498 "thin_provision": true, 00:18:43.498 "num_allocated_clusters": 0, 00:18:43.498 "snapshot": false, 00:18:43.498 "clone": false, 00:18:43.498 "esnap_clone": false 00:18:43.498 } 00:18:43.498 } 00:18:43.498 } 00:18:43.498 ]' 00:18:43.498 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:43.498 18:32:03 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:18:43.498 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:43.498 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:43.498 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:43.498 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:43.498 18:32:03 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:43.498 18:32:03 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:43.759 18:32:03 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:43.759 18:32:03 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:43.759 18:32:03 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size f092cb59-c431-42d7-b3e7-aa53de175b62 00:18:43.759 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f092cb59-c431-42d7-b3e7-aa53de175b62 00:18:43.759 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:43.759 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:43.759 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:43.759 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f092cb59-c431-42d7-b3e7-aa53de175b62 00:18:44.020 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:44.020 { 00:18:44.020 "name": "f092cb59-c431-42d7-b3e7-aa53de175b62", 00:18:44.020 "aliases": [ 00:18:44.020 "lvs/nvme0n1p0" 00:18:44.020 ], 00:18:44.020 "product_name": "Logical Volume", 00:18:44.020 "block_size": 4096, 00:18:44.020 "num_blocks": 26476544, 00:18:44.020 "uuid": "f092cb59-c431-42d7-b3e7-aa53de175b62", 00:18:44.020 "assigned_rate_limits": { 00:18:44.020 "rw_ios_per_sec": 0, 00:18:44.020 "rw_mbytes_per_sec": 0, 00:18:44.020 "r_mbytes_per_sec": 0, 00:18:44.020 "w_mbytes_per_sec": 0 00:18:44.020 }, 00:18:44.020 "claimed": false, 00:18:44.020 "zoned": false, 00:18:44.020 "supported_io_types": { 00:18:44.020 "read": true, 00:18:44.020 "write": true, 00:18:44.020 "unmap": true, 00:18:44.020 "flush": false, 00:18:44.020 "reset": true, 00:18:44.020 "nvme_admin": false, 00:18:44.020 "nvme_io": false, 00:18:44.020 "nvme_io_md": false, 00:18:44.020 "write_zeroes": true, 00:18:44.020 "zcopy": false, 00:18:44.020 "get_zone_info": false, 00:18:44.020 "zone_management": false, 00:18:44.020 "zone_append": false, 00:18:44.020 "compare": false, 00:18:44.020 "compare_and_write": false, 00:18:44.020 "abort": false, 00:18:44.020 "seek_hole": true, 00:18:44.020 "seek_data": true, 00:18:44.020 "copy": false, 00:18:44.020 "nvme_iov_md": false 00:18:44.020 }, 00:18:44.020 "driver_specific": { 00:18:44.020 "lvol": { 00:18:44.020 "lvol_store_uuid": "2300869b-4d61-4517-b49c-5f6c70ada65d", 00:18:44.020 "base_bdev": "nvme0n1", 00:18:44.020 "thin_provision": true, 00:18:44.020 "num_allocated_clusters": 0, 00:18:44.020 "snapshot": false, 00:18:44.020 "clone": false, 00:18:44.020 "esnap_clone": false 00:18:44.020 } 00:18:44.020 } 00:18:44.020 } 00:18:44.020 ]' 00:18:44.020 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:44.020 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:44.020 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:44.020 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:18:44.021 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:44.021 18:32:03 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:44.021 18:32:03 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:44.021 18:32:03 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f092cb59-c431-42d7-b3e7-aa53de175b62 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:44.283 [2024-11-29 18:32:03.987307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.283 [2024-11-29 18:32:03.987437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:44.283 [2024-11-29 18:32:03.987472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:44.283 [2024-11-29 18:32:03.987481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.283 [2024-11-29 18:32:03.989346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.283 [2024-11-29 18:32:03.989381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:44.283 [2024-11-29 18:32:03.989389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.831 ms 00:18:44.283 [2024-11-29 18:32:03.989405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.283 [2024-11-29 18:32:03.989499] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:44.283 [2024-11-29 18:32:03.989684] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:44.283 [2024-11-29 18:32:03.989702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.283 [2024-11-29 18:32:03.989711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:44.283 [2024-11-29 18:32:03.989717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:18:44.283 [2024-11-29 18:32:03.989723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.283 [2024-11-29 18:32:03.989977] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 810cac8b-8d30-4fbe-8f84-c99f968afe2f 00:18:44.283 [2024-11-29 18:32:03.991037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.283 [2024-11-29 18:32:03.991064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:44.283 [2024-11-29 18:32:03.991074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:44.283 [2024-11-29 18:32:03.991083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.283 [2024-11-29 18:32:03.996187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.283 [2024-11-29 18:32:03.996220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:44.283 [2024-11-29 18:32:03.996231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.022 ms 00:18:44.283 [2024-11-29 18:32:03.996238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.283 [2024-11-29 18:32:03.996348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.283 [2024-11-29 18:32:03.996359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:44.283 [2024-11-29 18:32:03.996368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.052 ms 00:18:44.283 [2024-11-29 18:32:03.996374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.283 [2024-11-29 18:32:03.996413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.283 [2024-11-29 18:32:03.996420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:44.283 [2024-11-29 18:32:03.996427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:44.283 [2024-11-29 18:32:03.996440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.283 [2024-11-29 18:32:03.996480] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:44.283 [2024-11-29 18:32:03.997747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.283 [2024-11-29 18:32:03.997846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:44.283 [2024-11-29 18:32:03.997858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.273 ms 00:18:44.283 [2024-11-29 18:32:03.997865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.283 [2024-11-29 18:32:03.997911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.283 [2024-11-29 18:32:03.997919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:44.283 [2024-11-29 18:32:03.997925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:44.283 [2024-11-29 18:32:03.997934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.283 [2024-11-29 18:32:03.997972] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:44.283 [2024-11-29 18:32:03.998082] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:44.283 [2024-11-29 18:32:03.998091] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:44.283 [2024-11-29 18:32:03.998101] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:44.283 [2024-11-29 18:32:03.998109] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:44.283 [2024-11-29 18:32:03.998116] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:44.283 [2024-11-29 18:32:03.998123] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:44.283 [2024-11-29 18:32:03.998129] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:44.283 [2024-11-29 18:32:03.998134] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:44.283 [2024-11-29 18:32:03.998142] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:44.283 [2024-11-29 18:32:03.998148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.283 [2024-11-29 18:32:03.998156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:44.283 [2024-11-29 18:32:03.998162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:18:44.283 [2024-11-29 18:32:03.998169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.283 [2024-11-29 18:32:03.998249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.283 
[2024-11-29 18:32:03.998258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:44.283 [2024-11-29 18:32:03.998264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:44.283 [2024-11-29 18:32:03.998270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.283 [2024-11-29 18:32:03.998370] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:44.283 [2024-11-29 18:32:03.998379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:44.283 [2024-11-29 18:32:03.998385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:44.283 [2024-11-29 18:32:03.998392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.283 [2024-11-29 18:32:03.998397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:44.283 [2024-11-29 18:32:03.998404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:44.283 [2024-11-29 18:32:03.998409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:44.283 [2024-11-29 18:32:03.998416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:44.283 [2024-11-29 18:32:03.998421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:44.283 [2024-11-29 18:32:03.998427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:44.283 [2024-11-29 18:32:03.998433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:44.283 [2024-11-29 18:32:03.998440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:44.283 [2024-11-29 18:32:03.998446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:44.283 [2024-11-29 18:32:03.998470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:44.283 [2024-11-29 18:32:03.998476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:44.283 [2024-11-29 18:32:03.998483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.283 [2024-11-29 18:32:03.998489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:44.283 [2024-11-29 18:32:03.998498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:44.283 [2024-11-29 18:32:03.998503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.283 [2024-11-29 18:32:03.998510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:44.283 [2024-11-29 18:32:03.998516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:44.283 [2024-11-29 18:32:03.998523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:44.283 [2024-11-29 18:32:03.998529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:44.283 [2024-11-29 18:32:03.998536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:44.283 [2024-11-29 18:32:03.998541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:44.283 [2024-11-29 18:32:03.998548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:44.283 [2024-11-29 18:32:03.998554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:44.283 [2024-11-29 18:32:03.998561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:44.283 [2024-11-29 18:32:03.998567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:18:44.283 [2024-11-29 18:32:03.998576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:44.283 [2024-11-29 18:32:03.998582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:44.283 [2024-11-29 18:32:03.998589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:44.283 [2024-11-29 18:32:03.998595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:44.283 [2024-11-29 18:32:03.998602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:44.283 [2024-11-29 18:32:03.998608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:44.283 [2024-11-29 18:32:03.998615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:44.284 [2024-11-29 18:32:03.998621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:44.284 [2024-11-29 18:32:03.998628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:44.284 [2024-11-29 18:32:03.998634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:44.284 [2024-11-29 18:32:03.998641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.284 [2024-11-29 18:32:03.998647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:44.284 [2024-11-29 18:32:03.998654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:44.284 [2024-11-29 18:32:03.998659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.284 [2024-11-29 18:32:03.998666] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:44.284 [2024-11-29 18:32:03.998672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:44.284 [2024-11-29 18:32:03.998682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:44.284 [2024-11-29 18:32:03.998698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.284 [2024-11-29 18:32:03.998705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:44.284 [2024-11-29 18:32:03.998711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:44.284 [2024-11-29 18:32:03.998718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:44.284 [2024-11-29 18:32:03.998724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:44.284 [2024-11-29 18:32:03.998732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:44.284 [2024-11-29 18:32:03.998738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:44.284 [2024-11-29 18:32:03.998747] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:44.284 [2024-11-29 18:32:03.998755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:44.284 [2024-11-29 18:32:03.998764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:44.284 [2024-11-29 18:32:03.998770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:44.284 [2024-11-29 18:32:03.998778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:18:44.284 [2024-11-29 18:32:03.998785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:44.284 [2024-11-29 18:32:03.998792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:44.284 [2024-11-29 18:32:03.998798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:44.284 [2024-11-29 18:32:03.998807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:44.284 [2024-11-29 18:32:03.998813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:44.284 [2024-11-29 18:32:03.998820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:44.284 [2024-11-29 18:32:03.998826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:44.284 [2024-11-29 18:32:03.998832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:44.284 [2024-11-29 18:32:03.998837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:44.284 [2024-11-29 18:32:03.998845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:44.284 [2024-11-29 18:32:03.998851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:44.284 [2024-11-29 18:32:03.998857] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:44.284 [2024-11-29 18:32:03.998864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:44.284 [2024-11-29 18:32:03.998874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:44.284 [2024-11-29 18:32:03.998880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:44.284 [2024-11-29 18:32:03.998886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:44.284 [2024-11-29 18:32:03.998891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:44.284 [2024-11-29 18:32:03.998898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.284 [2024-11-29 18:32:03.998903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:44.284 [2024-11-29 18:32:03.998911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:18:44.284 [2024-11-29 18:32:03.998917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.284 [2024-11-29 18:32:03.998987] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:18:44.284 [2024-11-29 18:32:03.998994] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:46.829 [2024-11-29 18:32:06.425174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.425232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:46.830 [2024-11-29 18:32:06.425251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2426.173 ms 00:18:46.830 [2024-11-29 18:32:06.425259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.433956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.433999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:46.830 [2024-11-29 18:32:06.434012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.592 ms 00:18:46.830 [2024-11-29 18:32:06.434020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.434168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.434180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:46.830 [2024-11-29 18:32:06.434193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:46.830 [2024-11-29 18:32:06.434200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.453006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.453056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:46.830 [2024-11-29 18:32:06.453074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.768 ms 00:18:46.830 [2024-11-29 18:32:06.453084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.453191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.453208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:46.830 [2024-11-29 18:32:06.453221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:46.830 [2024-11-29 18:32:06.453231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.453632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.453651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:46.830 [2024-11-29 18:32:06.453666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:18:46.830 [2024-11-29 18:32:06.453677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.453843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.453856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:46.830 [2024-11-29 18:32:06.453873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:18:46.830 [2024-11-29 18:32:06.453883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.459949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.459983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:18:46.830 [2024-11-29 18:32:06.459995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.023 ms 00:18:46.830 [2024-11-29 18:32:06.460002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.468265] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:46.830 [2024-11-29 18:32:06.482970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.483150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:46.830 [2024-11-29 18:32:06.483166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.879 ms 00:18:46.830 [2024-11-29 18:32:06.483176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.539513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.539562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:46.830 [2024-11-29 18:32:06.539577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.248 ms 00:18:46.830 [2024-11-29 18:32:06.539590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.539792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.539806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:46.830 [2024-11-29 18:32:06.539815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:18:46.830 [2024-11-29 18:32:06.539825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.542956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.542995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:46.830 [2024-11-29 18:32:06.543005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.097 ms 00:18:46.830 [2024-11-29 18:32:06.543017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.545514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.545664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:46.830 [2024-11-29 18:32:06.545680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.445 ms 00:18:46.830 [2024-11-29 18:32:06.545689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.545989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.546000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:46.830 [2024-11-29 18:32:06.546020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:18:46.830 [2024-11-29 18:32:06.546031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.574124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.574173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:46.830 [2024-11-29 18:32:06.574183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.043 ms 00:18:46.830 [2024-11-29 18:32:06.574194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
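The layout numbers in the startup dump above are internally consistent and can be cross-checked by hand: the l2p region size follows directly from the reported entry count and address size, and the --l2p_dram_limit 60 passed to bdev_ftl_create is why ftl_l2p_cache_init caps the resident table at 59 (of 60) MiB. A quick check using only values printed in this log:

# Values copied from the ftl_layout_setup output above
l2p_entries=23592960   # "L2P entries: 23592960"
l2p_addr_size=4        # "L2P address size: 4" (bytes per entry)
echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))   # -> 90, matching "Region l2p ... blocks: 90.00 MiB"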
00:18:46.830 [2024-11-29 18:32:06.578367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.578405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:46.830 [2024-11-29 18:32:06.578415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.080 ms 00:18:46.830 [2024-11-29 18:32:06.578424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.581365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.581511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:46.830 [2024-11-29 18:32:06.581526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.882 ms 00:18:46.830 [2024-11-29 18:32:06.581535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.585000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.585114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:46.830 [2024-11-29 18:32:06.585128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.414 ms 00:18:46.830 [2024-11-29 18:32:06.585139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.585200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.585212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:46.830 [2024-11-29 18:32:06.585231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:46.830 [2024-11-29 18:32:06.585240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.585317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.830 [2024-11-29 18:32:06.585327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:46.830 [2024-11-29 18:32:06.585335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:46.830 [2024-11-29 18:32:06.585343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.830 [2024-11-29 18:32:06.586182] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:46.830 [2024-11-29 18:32:06.587149] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2598.595 ms, result 0 00:18:46.830 [2024-11-29 18:32:06.587943] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:46.830 { 00:18:46.830 "name": "ftl0", 00:18:46.830 "uuid": "810cac8b-8d30-4fbe-8f84-c99f968afe2f" 00:18:46.830 } 00:18:46.830 18:32:06 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:46.830 18:32:06 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:46.830 18:32:06 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:46.830 18:32:06 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:18:46.830 18:32:06 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:46.830 18:32:06 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:46.830 18:32:06 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:47.089 18:32:06 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:47.089 [ 00:18:47.089 { 00:18:47.089 "name": "ftl0", 00:18:47.089 "aliases": [ 00:18:47.089 "810cac8b-8d30-4fbe-8f84-c99f968afe2f" 00:18:47.089 ], 00:18:47.089 "product_name": "FTL disk", 00:18:47.089 "block_size": 4096, 00:18:47.089 "num_blocks": 23592960, 00:18:47.089 "uuid": "810cac8b-8d30-4fbe-8f84-c99f968afe2f", 00:18:47.089 "assigned_rate_limits": { 00:18:47.089 "rw_ios_per_sec": 0, 00:18:47.089 "rw_mbytes_per_sec": 0, 00:18:47.089 "r_mbytes_per_sec": 0, 00:18:47.089 "w_mbytes_per_sec": 0 00:18:47.089 }, 00:18:47.089 "claimed": false, 00:18:47.089 "zoned": false, 00:18:47.089 "supported_io_types": { 00:18:47.089 "read": true, 00:18:47.089 "write": true, 00:18:47.089 "unmap": true, 00:18:47.089 "flush": true, 00:18:47.089 "reset": false, 00:18:47.089 "nvme_admin": false, 00:18:47.089 "nvme_io": false, 00:18:47.089 "nvme_io_md": false, 00:18:47.089 "write_zeroes": true, 00:18:47.089 "zcopy": false, 00:18:47.089 "get_zone_info": false, 00:18:47.089 "zone_management": false, 00:18:47.089 "zone_append": false, 00:18:47.090 "compare": false, 00:18:47.090 "compare_and_write": false, 00:18:47.090 "abort": false, 00:18:47.090 "seek_hole": false, 00:18:47.090 "seek_data": false, 00:18:47.090 "copy": false, 00:18:47.090 "nvme_iov_md": false 00:18:47.090 }, 00:18:47.090 "driver_specific": { 00:18:47.090 "ftl": { 00:18:47.090 "base_bdev": "f092cb59-c431-42d7-b3e7-aa53de175b62", 00:18:47.090 "cache": "nvc0n1p0" 00:18:47.090 } 00:18:47.090 } 00:18:47.090 } 00:18:47.090 ] 00:18:47.348 18:32:07 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:18:47.348 18:32:07 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:47.348 18:32:07 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:47.348 18:32:07 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:47.348 18:32:07 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:47.606 18:32:07 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:47.606 { 00:18:47.606 "name": "ftl0", 00:18:47.606 "aliases": [ 00:18:47.606 "810cac8b-8d30-4fbe-8f84-c99f968afe2f" 00:18:47.606 ], 00:18:47.606 "product_name": "FTL disk", 00:18:47.606 "block_size": 4096, 00:18:47.606 "num_blocks": 23592960, 00:18:47.606 "uuid": "810cac8b-8d30-4fbe-8f84-c99f968afe2f", 00:18:47.606 "assigned_rate_limits": { 00:18:47.606 "rw_ios_per_sec": 0, 00:18:47.606 "rw_mbytes_per_sec": 0, 00:18:47.606 "r_mbytes_per_sec": 0, 00:18:47.606 "w_mbytes_per_sec": 0 00:18:47.606 }, 00:18:47.606 "claimed": false, 00:18:47.606 "zoned": false, 00:18:47.606 "supported_io_types": { 00:18:47.606 "read": true, 00:18:47.606 "write": true, 00:18:47.606 "unmap": true, 00:18:47.606 "flush": true, 00:18:47.606 "reset": false, 00:18:47.606 "nvme_admin": false, 00:18:47.606 "nvme_io": false, 00:18:47.606 "nvme_io_md": false, 00:18:47.606 "write_zeroes": true, 00:18:47.606 "zcopy": false, 00:18:47.606 "get_zone_info": false, 00:18:47.606 "zone_management": false, 00:18:47.606 "zone_append": false, 00:18:47.606 "compare": false, 00:18:47.606 "compare_and_write": false, 00:18:47.606 "abort": false, 00:18:47.606 "seek_hole": false, 00:18:47.606 "seek_data": false, 00:18:47.606 "copy": false, 00:18:47.606 "nvme_iov_md": false 00:18:47.606 }, 00:18:47.606 "driver_specific": { 00:18:47.606 "ftl": { 00:18:47.606 "base_bdev": "f092cb59-c431-42d7-b3e7-aa53de175b62", 
00:18:47.606 "cache": "nvc0n1p0" 00:18:47.606 } 00:18:47.606 } 00:18:47.606 } 00:18:47.606 ]' 00:18:47.606 18:32:07 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:47.606 18:32:07 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:47.606 18:32:07 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:47.866 [2024-11-29 18:32:07.620709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.866 [2024-11-29 18:32:07.620750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:47.866 [2024-11-29 18:32:07.620774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:47.866 [2024-11-29 18:32:07.620782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.866 [2024-11-29 18:32:07.620822] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:47.866 [2024-11-29 18:32:07.621268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.866 [2024-11-29 18:32:07.621285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:47.866 [2024-11-29 18:32:07.621296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:18:47.866 [2024-11-29 18:32:07.621305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.866 [2024-11-29 18:32:07.621945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.866 [2024-11-29 18:32:07.621963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:47.866 [2024-11-29 18:32:07.621971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:18:47.866 [2024-11-29 18:32:07.621980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.866 [2024-11-29 18:32:07.625638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.866 [2024-11-29 18:32:07.625662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:47.866 [2024-11-29 18:32:07.625672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.627 ms 00:18:47.866 [2024-11-29 18:32:07.625684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.866 [2024-11-29 18:32:07.632607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.866 [2024-11-29 18:32:07.632638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:47.866 [2024-11-29 18:32:07.632648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.861 ms 00:18:47.866 [2024-11-29 18:32:07.632659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.866 [2024-11-29 18:32:07.634597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.866 [2024-11-29 18:32:07.634632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:47.866 [2024-11-29 18:32:07.634641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.861 ms 00:18:47.866 [2024-11-29 18:32:07.634650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.866 [2024-11-29 18:32:07.639035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.866 [2024-11-29 18:32:07.639071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:47.866 [2024-11-29 18:32:07.639083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.339 ms 00:18:47.866 [2024-11-29 18:32:07.639093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.866 [2024-11-29 18:32:07.639297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.866 [2024-11-29 18:32:07.639312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:47.866 [2024-11-29 18:32:07.639320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:18:47.866 [2024-11-29 18:32:07.639330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.866 [2024-11-29 18:32:07.641095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.866 [2024-11-29 18:32:07.641129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:47.866 [2024-11-29 18:32:07.641139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.736 ms 00:18:47.866 [2024-11-29 18:32:07.641149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.866 [2024-11-29 18:32:07.642629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.866 [2024-11-29 18:32:07.642758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:47.866 [2024-11-29 18:32:07.642772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.432 ms 00:18:47.866 [2024-11-29 18:32:07.642781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.866 [2024-11-29 18:32:07.644025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.866 [2024-11-29 18:32:07.644061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:47.866 [2024-11-29 18:32:07.644070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.196 ms 00:18:47.866 [2024-11-29 18:32:07.644079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.866 [2024-11-29 18:32:07.645165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.866 [2024-11-29 18:32:07.645202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:47.866 [2024-11-29 18:32:07.645211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:18:47.866 [2024-11-29 18:32:07.645220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.866 [2024-11-29 18:32:07.645263] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:47.866 [2024-11-29 18:32:07.645277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645339] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 
18:32:07.645572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:47.866 [2024-11-29 18:32:07.645609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:18:47.867 [2024-11-29 18:32:07.645780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.645994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.646003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.646010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.646019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.646026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.646035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.646043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.646052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.646059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.646068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.646092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.646101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.646108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.646119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.646126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.646136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.646144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:47.867 [2024-11-29 18:32:07.646160] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:47.867 [2024-11-29 18:32:07.646168] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 810cac8b-8d30-4fbe-8f84-c99f968afe2f 00:18:47.867 [2024-11-29 18:32:07.646180] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:47.867 [2024-11-29 18:32:07.646187] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:47.867 [2024-11-29 18:32:07.646195] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:47.867 [2024-11-29 18:32:07.646213] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:47.867 [2024-11-29 18:32:07.646222] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:47.867 [2024-11-29 18:32:07.646229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:47.867 
[2024-11-29 18:32:07.646238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:47.867 [2024-11-29 18:32:07.646244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:47.867 [2024-11-29 18:32:07.646252] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:47.867 [2024-11-29 18:32:07.646259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.867 [2024-11-29 18:32:07.646268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:47.867 [2024-11-29 18:32:07.646276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:18:47.867 [2024-11-29 18:32:07.646287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.867 [2024-11-29 18:32:07.648028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.867 [2024-11-29 18:32:07.648118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:47.867 [2024-11-29 18:32:07.648169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.699 ms 00:18:47.867 [2024-11-29 18:32:07.648205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.867 [2024-11-29 18:32:07.648306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.867 [2024-11-29 18:32:07.648330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:47.867 [2024-11-29 18:32:07.648382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:47.867 [2024-11-29 18:32:07.648424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.867 [2024-11-29 18:32:07.653931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.867 [2024-11-29 18:32:07.654042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:47.867 [2024-11-29 18:32:07.654099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.867 [2024-11-29 18:32:07.654149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.867 [2024-11-29 18:32:07.654259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.867 [2024-11-29 18:32:07.654286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:47.867 [2024-11-29 18:32:07.654333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.867 [2024-11-29 18:32:07.654382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.867 [2024-11-29 18:32:07.654472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.867 [2024-11-29 18:32:07.654534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:47.868 [2024-11-29 18:32:07.654557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.868 [2024-11-29 18:32:07.654600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.868 [2024-11-29 18:32:07.654650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.868 [2024-11-29 18:32:07.654709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:47.868 [2024-11-29 18:32:07.654732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.868 [2024-11-29 18:32:07.654754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.868 [2024-11-29 18:32:07.664252] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:18:47.868 [2024-11-29 18:32:07.664386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:47.868 [2024-11-29 18:32:07.664437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.868 [2024-11-29 18:32:07.664479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.868 [2024-11-29 18:32:07.672466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.868 [2024-11-29 18:32:07.672598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:47.868 [2024-11-29 18:32:07.672655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.868 [2024-11-29 18:32:07.672683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.868 [2024-11-29 18:32:07.672755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.868 [2024-11-29 18:32:07.672780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:47.868 [2024-11-29 18:32:07.672845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.868 [2024-11-29 18:32:07.672870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.868 [2024-11-29 18:32:07.672931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.868 [2024-11-29 18:32:07.672941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:47.868 [2024-11-29 18:32:07.672948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.868 [2024-11-29 18:32:07.672958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.868 [2024-11-29 18:32:07.673042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.868 [2024-11-29 18:32:07.673056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:47.868 [2024-11-29 18:32:07.673064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.868 [2024-11-29 18:32:07.673073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.868 [2024-11-29 18:32:07.673122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.868 [2024-11-29 18:32:07.673132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:47.868 [2024-11-29 18:32:07.673140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.868 [2024-11-29 18:32:07.673150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.868 [2024-11-29 18:32:07.673208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.868 [2024-11-29 18:32:07.673220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:47.868 [2024-11-29 18:32:07.673227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.868 [2024-11-29 18:32:07.673244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.868 [2024-11-29 18:32:07.673296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.868 [2024-11-29 18:32:07.673307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:47.868 [2024-11-29 18:32:07.673315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.868 [2024-11-29 18:32:07.673324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.868 
[2024-11-29 18:32:07.673516] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.798 ms, result 0 00:18:47.868 true 00:18:47.868 18:32:07 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 87630 00:18:47.868 18:32:07 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87630 ']' 00:18:47.868 18:32:07 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87630 00:18:47.868 18:32:07 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:18:47.868 18:32:07 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.868 18:32:07 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87630 00:18:47.868 killing process with pid 87630 00:18:47.868 18:32:07 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:47.868 18:32:07 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:47.868 18:32:07 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87630' 00:18:47.868 18:32:07 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87630 00:18:47.868 18:32:07 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87630 00:18:53.174 18:32:12 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:53.743 65536+0 records in 00:18:53.743 65536+0 records out 00:18:53.743 268435456 bytes (268 MB, 256 MiB) copied, 0.818386 s, 328 MB/s 00:18:53.743 18:32:13 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:54.003 [2024-11-29 18:32:13.655501] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:18:54.003 [2024-11-29 18:32:13.655632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87784 ] 00:18:54.003 [2024-11-29 18:32:13.812763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.003 [2024-11-29 18:32:13.840951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.264 [2024-11-29 18:32:13.931733] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:54.264 [2024-11-29 18:32:13.931802] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:54.264 [2024-11-29 18:32:14.090450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.264 [2024-11-29 18:32:14.090701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:54.264 [2024-11-29 18:32:14.090735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:54.264 [2024-11-29 18:32:14.090745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.264 [2024-11-29 18:32:14.093349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.264 [2024-11-29 18:32:14.093407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:54.264 [2024-11-29 18:32:14.093418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.575 ms 00:18:54.264 [2024-11-29 18:32:14.093426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.264 [2024-11-29 18:32:14.093559] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:54.264 [2024-11-29 18:32:14.093837] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:54.264 [2024-11-29 18:32:14.093859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.264 [2024-11-29 18:32:14.093868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:54.264 [2024-11-29 18:32:14.093879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:18:54.264 [2024-11-29 18:32:14.093888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.264 [2024-11-29 18:32:14.096158] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:54.264 [2024-11-29 18:32:14.098992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.264 [2024-11-29 18:32:14.099030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:54.264 [2024-11-29 18:32:14.099045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.839 ms 00:18:54.264 [2024-11-29 18:32:14.099053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.264 [2024-11-29 18:32:14.099124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.264 [2024-11-29 18:32:14.099134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:54.264 [2024-11-29 18:32:14.099143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:54.264 [2024-11-29 18:32:14.099150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.264 [2024-11-29 18:32:14.104122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:54.264 [2024-11-29 18:32:14.104154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:54.264 [2024-11-29 18:32:14.104163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.929 ms 00:18:54.264 [2024-11-29 18:32:14.104171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.264 [2024-11-29 18:32:14.104281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.264 [2024-11-29 18:32:14.104295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:54.264 [2024-11-29 18:32:14.104303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:54.264 [2024-11-29 18:32:14.104317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.264 [2024-11-29 18:32:14.104341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.264 [2024-11-29 18:32:14.104352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:54.264 [2024-11-29 18:32:14.104363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:54.264 [2024-11-29 18:32:14.104369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.264 [2024-11-29 18:32:14.104392] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:54.264 [2024-11-29 18:32:14.105782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.264 [2024-11-29 18:32:14.105909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:54.264 [2024-11-29 18:32:14.105928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms 00:18:54.264 [2024-11-29 18:32:14.105944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.264 [2024-11-29 18:32:14.106000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.264 [2024-11-29 18:32:14.106013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:54.264 [2024-11-29 18:32:14.106024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:54.264 [2024-11-29 18:32:14.106041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.264 [2024-11-29 18:32:14.106065] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:54.264 [2024-11-29 18:32:14.106092] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:54.264 [2024-11-29 18:32:14.106130] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:54.264 [2024-11-29 18:32:14.106147] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:54.264 [2024-11-29 18:32:14.106248] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:54.264 [2024-11-29 18:32:14.106261] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:54.264 [2024-11-29 18:32:14.106271] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:54.264 [2024-11-29 18:32:14.106281] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:54.264 [2024-11-29 18:32:14.106289] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:54.264 [2024-11-29 18:32:14.106297] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:54.264 [2024-11-29 18:32:14.106307] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:54.264 [2024-11-29 18:32:14.106314] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:54.264 [2024-11-29 18:32:14.106324] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:54.264 [2024-11-29 18:32:14.106335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.264 [2024-11-29 18:32:14.106341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:54.264 [2024-11-29 18:32:14.106349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:18:54.264 [2024-11-29 18:32:14.106356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.264 [2024-11-29 18:32:14.106442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.264 [2024-11-29 18:32:14.106490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:54.264 [2024-11-29 18:32:14.106500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:54.264 [2024-11-29 18:32:14.106507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.264 [2024-11-29 18:32:14.106608] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:54.264 [2024-11-29 18:32:14.106623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:54.264 [2024-11-29 18:32:14.106632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:54.264 [2024-11-29 18:32:14.106644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.264 [2024-11-29 18:32:14.106653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:54.264 [2024-11-29 18:32:14.106660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:54.264 [2024-11-29 18:32:14.106668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:54.264 [2024-11-29 18:32:14.106678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:54.264 [2024-11-29 18:32:14.106686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:54.265 [2024-11-29 18:32:14.106693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:54.265 [2024-11-29 18:32:14.106700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:54.265 [2024-11-29 18:32:14.106708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:54.265 [2024-11-29 18:32:14.106715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:54.265 [2024-11-29 18:32:14.106722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:54.265 [2024-11-29 18:32:14.106730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:54.265 [2024-11-29 18:32:14.106739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.265 [2024-11-29 18:32:14.106747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:54.265 [2024-11-29 18:32:14.106755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:54.265 [2024-11-29 18:32:14.106762] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.265 [2024-11-29 18:32:14.106770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:54.265 [2024-11-29 18:32:14.106778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:54.265 [2024-11-29 18:32:14.106785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.265 [2024-11-29 18:32:14.106792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:54.265 [2024-11-29 18:32:14.106805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:54.265 [2024-11-29 18:32:14.106813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.265 [2024-11-29 18:32:14.106820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:54.265 [2024-11-29 18:32:14.106828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:54.265 [2024-11-29 18:32:14.106835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.265 [2024-11-29 18:32:14.106842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:54.265 [2024-11-29 18:32:14.106849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:54.265 [2024-11-29 18:32:14.106856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.265 [2024-11-29 18:32:14.106864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:54.265 [2024-11-29 18:32:14.106871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:54.265 [2024-11-29 18:32:14.106878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:54.265 [2024-11-29 18:32:14.106885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:54.265 [2024-11-29 18:32:14.106892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:54.265 [2024-11-29 18:32:14.106900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:54.265 [2024-11-29 18:32:14.106907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:54.265 [2024-11-29 18:32:14.106914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:54.265 [2024-11-29 18:32:14.106923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.265 [2024-11-29 18:32:14.106930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:54.265 [2024-11-29 18:32:14.106937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:54.265 [2024-11-29 18:32:14.106944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.265 [2024-11-29 18:32:14.106951] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:54.265 [2024-11-29 18:32:14.106960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:54.265 [2024-11-29 18:32:14.106966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:54.265 [2024-11-29 18:32:14.106973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.265 [2024-11-29 18:32:14.106982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:54.265 [2024-11-29 18:32:14.106989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:54.265 [2024-11-29 18:32:14.106995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:54.265 
[2024-11-29 18:32:14.107002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:54.265 [2024-11-29 18:32:14.107008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:54.265 [2024-11-29 18:32:14.107015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:54.265 [2024-11-29 18:32:14.107023] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:54.265 [2024-11-29 18:32:14.107032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:54.265 [2024-11-29 18:32:14.107042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:54.265 [2024-11-29 18:32:14.107049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:54.265 [2024-11-29 18:32:14.107056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:54.265 [2024-11-29 18:32:14.107063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:54.265 [2024-11-29 18:32:14.107070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:54.265 [2024-11-29 18:32:14.107077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:54.265 [2024-11-29 18:32:14.107083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:54.265 [2024-11-29 18:32:14.107089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:54.265 [2024-11-29 18:32:14.107096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:54.265 [2024-11-29 18:32:14.107103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:54.265 [2024-11-29 18:32:14.107110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:54.265 [2024-11-29 18:32:14.107117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:54.265 [2024-11-29 18:32:14.107124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:54.265 [2024-11-29 18:32:14.107132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:54.265 [2024-11-29 18:32:14.107139] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:54.265 [2024-11-29 18:32:14.107148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:54.265 [2024-11-29 18:32:14.107158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:54.265 [2024-11-29 18:32:14.107165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:54.265 [2024-11-29 18:32:14.107172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:54.265 [2024-11-29 18:32:14.107179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:54.265 [2024-11-29 18:32:14.107186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.265 [2024-11-29 18:32:14.107193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:54.265 [2024-11-29 18:32:14.107200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:18:54.265 [2024-11-29 18:32:14.107207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.265 [2024-11-29 18:32:14.116274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.265 [2024-11-29 18:32:14.116313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:54.265 [2024-11-29 18:32:14.116322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.017 ms 00:18:54.265 [2024-11-29 18:32:14.116330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.265 [2024-11-29 18:32:14.116446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.265 [2024-11-29 18:32:14.116473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:54.265 [2024-11-29 18:32:14.116482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:54.265 [2024-11-29 18:32:14.116494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.265 [2024-11-29 18:32:14.132809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.265 [2024-11-29 18:32:14.132853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:54.265 [2024-11-29 18:32:14.132867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.291 ms 00:18:54.265 [2024-11-29 18:32:14.132875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.265 [2024-11-29 18:32:14.132959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.265 [2024-11-29 18:32:14.132972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:54.265 [2024-11-29 18:32:14.132982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:54.265 [2024-11-29 18:32:14.132990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.265 [2024-11-29 18:32:14.133322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.265 [2024-11-29 18:32:14.133344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:54.265 [2024-11-29 18:32:14.133354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:18:54.265 [2024-11-29 18:32:14.133362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.265 [2024-11-29 18:32:14.133523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.265 [2024-11-29 18:32:14.133537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:54.265 [2024-11-29 18:32:14.133547] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:18:54.265 [2024-11-29 18:32:14.133560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.265 [2024-11-29 18:32:14.139247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.265 [2024-11-29 18:32:14.139377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:54.265 [2024-11-29 18:32:14.139400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.664 ms 00:18:54.265 [2024-11-29 18:32:14.139408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.265 [2024-11-29 18:32:14.142151] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:54.265 [2024-11-29 18:32:14.142188] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:54.266 [2024-11-29 18:32:14.142200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.266 [2024-11-29 18:32:14.142207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:54.266 [2024-11-29 18:32:14.142215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.690 ms 00:18:54.266 [2024-11-29 18:32:14.142222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.266 [2024-11-29 18:32:14.157074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.266 [2024-11-29 18:32:14.157193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:54.266 [2024-11-29 18:32:14.157245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.809 ms 00:18:54.266 [2024-11-29 18:32:14.157268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.266 [2024-11-29 18:32:14.159776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.266 [2024-11-29 18:32:14.159906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:54.266 [2024-11-29 18:32:14.159922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.167 ms 00:18:54.266 [2024-11-29 18:32:14.159930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.266 [2024-11-29 18:32:14.161961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.266 [2024-11-29 18:32:14.162089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:54.266 [2024-11-29 18:32:14.162105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.985 ms 00:18:54.266 [2024-11-29 18:32:14.162112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.266 [2024-11-29 18:32:14.163729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.266 [2024-11-29 18:32:14.163816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:54.266 [2024-11-29 18:32:14.163850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:18:54.266 [2024-11-29 18:32:14.163873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.526 [2024-11-29 18:32:14.186749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.526 [2024-11-29 18:32:14.186798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:54.526 [2024-11-29 18:32:14.186817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.817 ms 00:18:54.526 [2024-11-29 18:32:14.186825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.526 [2024-11-29 18:32:14.194373] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:54.526 [2024-11-29 18:32:14.209279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.526 [2024-11-29 18:32:14.209446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:54.526 [2024-11-29 18:32:14.209475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.364 ms 00:18:54.526 [2024-11-29 18:32:14.209485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.526 [2024-11-29 18:32:14.209569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.526 [2024-11-29 18:32:14.209580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:54.526 [2024-11-29 18:32:14.209589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:54.526 [2024-11-29 18:32:14.209599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.526 [2024-11-29 18:32:14.209643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.526 [2024-11-29 18:32:14.209652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:54.526 [2024-11-29 18:32:14.209659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:54.526 [2024-11-29 18:32:14.209667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.526 [2024-11-29 18:32:14.209696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.526 [2024-11-29 18:32:14.209704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:54.526 [2024-11-29 18:32:14.209712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:54.526 [2024-11-29 18:32:14.209720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.526 [2024-11-29 18:32:14.209751] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:54.526 [2024-11-29 18:32:14.209761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.526 [2024-11-29 18:32:14.209768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:54.526 [2024-11-29 18:32:14.209780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:54.526 [2024-11-29 18:32:14.209787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.526 [2024-11-29 18:32:14.214176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.526 [2024-11-29 18:32:14.214211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:54.527 [2024-11-29 18:32:14.214221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.366 ms 00:18:54.527 [2024-11-29 18:32:14.214234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.527 [2024-11-29 18:32:14.214311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.527 [2024-11-29 18:32:14.214320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:54.527 [2024-11-29 18:32:14.214329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:54.527 [2024-11-29 18:32:14.214339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.527 
[2024-11-29 18:32:14.215113] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:54.527 [2024-11-29 18:32:14.216117] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 124.418 ms, result 0 00:18:54.527 [2024-11-29 18:32:14.216756] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:54.527 [2024-11-29 18:32:14.225897] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:55.469  [2024-11-29T18:32:16.314Z] Copying: 17/256 [MB] (17 MBps) [2024-11-29T18:32:17.255Z] Copying: 37/256 [MB] (19 MBps) [2024-11-29T18:32:18.638Z] Copying: 77/256 [MB] (40 MBps) [2024-11-29T18:32:19.579Z] Copying: 105/256 [MB] (28 MBps) [2024-11-29T18:32:20.519Z] Copying: 120/256 [MB] (15 MBps) [2024-11-29T18:32:21.461Z] Copying: 149/256 [MB] (29 MBps) [2024-11-29T18:32:22.403Z] Copying: 173/256 [MB] (23 MBps) [2024-11-29T18:32:23.347Z] Copying: 194/256 [MB] (20 MBps) [2024-11-29T18:32:24.292Z] Copying: 210/256 [MB] (16 MBps) [2024-11-29T18:32:25.240Z] Copying: 231/256 [MB] (21 MBps) [2024-11-29T18:32:25.240Z] Copying: 256/256 [MB] (average 23 MBps)[2024-11-29 18:32:25.137513] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:05.335 [2024-11-29 18:32:25.138512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.335 [2024-11-29 18:32:25.138534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:05.335 [2024-11-29 18:32:25.138543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:05.335 [2024-11-29 18:32:25.138550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.335 [2024-11-29 18:32:25.138566] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:05.335 [2024-11-29 18:32:25.138926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.335 [2024-11-29 18:32:25.138939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:05.335 [2024-11-29 18:32:25.138946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:19:05.335 [2024-11-29 18:32:25.138953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.335 [2024-11-29 18:32:25.140660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.335 [2024-11-29 18:32:25.140686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:05.335 [2024-11-29 18:32:25.140694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.691 ms 00:19:05.335 [2024-11-29 18:32:25.140703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.335 [2024-11-29 18:32:25.146670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.335 [2024-11-29 18:32:25.146695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:05.335 [2024-11-29 18:32:25.146703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.953 ms 00:19:05.335 [2024-11-29 18:32:25.146709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.335 [2024-11-29 18:32:25.152129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.335 [2024-11-29 18:32:25.152230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finish L2P trims 00:19:05.335 [2024-11-29 18:32:25.152250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.396 ms 00:19:05.335 [2024-11-29 18:32:25.152258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.335 [2024-11-29 18:32:25.153402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.335 [2024-11-29 18:32:25.153425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:05.335 [2024-11-29 18:32:25.153431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.098 ms 00:19:05.335 [2024-11-29 18:32:25.153437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.335 [2024-11-29 18:32:25.157022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.335 [2024-11-29 18:32:25.157053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:05.335 [2024-11-29 18:32:25.157064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.552 ms 00:19:05.335 [2024-11-29 18:32:25.157070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.335 [2024-11-29 18:32:25.157160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.335 [2024-11-29 18:32:25.157168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:05.335 [2024-11-29 18:32:25.157174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:05.335 [2024-11-29 18:32:25.157182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.335 [2024-11-29 18:32:25.159139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.335 [2024-11-29 18:32:25.159232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:05.335 [2024-11-29 18:32:25.159243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.945 ms 00:19:05.335 [2024-11-29 18:32:25.159250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.335 [2024-11-29 18:32:25.160779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.335 [2024-11-29 18:32:25.160801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:05.335 [2024-11-29 18:32:25.160807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.504 ms 00:19:05.335 [2024-11-29 18:32:25.160813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.335 [2024-11-29 18:32:25.161908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.335 [2024-11-29 18:32:25.161932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:05.335 [2024-11-29 18:32:25.161939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.071 ms 00:19:05.335 [2024-11-29 18:32:25.161944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.335 [2024-11-29 18:32:25.162845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.335 [2024-11-29 18:32:25.162869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:05.335 [2024-11-29 18:32:25.162876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:19:05.335 [2024-11-29 18:32:25.162881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.335 [2024-11-29 18:32:25.162904] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:19:05.335 [2024-11-29 18:32:25.162915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-99: 0 / 261120 wr_cnt: 0 state: free (all bands free) 00:19:05.336 [2024-11-29 18:32:25.163501] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:05.336 [2024-11-29 18:32:25.163513] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:05.336 [2024-11-29 18:32:25.163519] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 810cac8b-8d30-4fbe-8f84-c99f968afe2f 00:19:05.336 [2024-11-29 18:32:25.163525] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:05.336 [2024-11-29 18:32:25.163530] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:05.336 [2024-11-29 18:32:25.163536] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:05.336 [2024-11-29 18:32:25.163541] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:05.336 [2024-11-29 18:32:25.163546] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:05.336 [2024-11-29 18:32:25.163552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:05.336 [2024-11-29 18:32:25.163562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:05.336 [2024-11-29 18:32:25.163567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:05.336 [2024-11-29 18:32:25.163571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:05.336 [2024-11-29 18:32:25.163577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.336 [2024-11-29 18:32:25.163583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:05.336 [2024-11-29 18:32:25.163589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:19:05.336 [2024-11-29 18:32:25.163595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.336 [2024-11-29 18:32:25.164804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.336 [2024-11-29 18:32:25.164820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:05.336 [2024-11-29 18:32:25.164827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.196 ms 00:19:05.336 [2024-11-29 18:32:25.164833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.336 [2024-11-29 18:32:25.164902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.336 [2024-11-29 18:32:25.164912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:05.336 [2024-11-29 18:32:25.164919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:05.336 [2024-11-29 18:32:25.164924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.336 [2024-11-29 18:32:25.169187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.336 [2024-11-29 18:32:25.169266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:05.336 [2024-11-29 18:32:25.169307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.336 [2024-11-29 18:32:25.169328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.337 [2024-11-29 18:32:25.169380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.337 [2024-11-29 18:32:25.169401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:05.337 [2024-11-29 18:32:25.169471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.337 [2024-11-29 18:32:25.169494] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.337 [2024-11-29 18:32:25.169535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.337 [2024-11-29 18:32:25.169553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:05.337 [2024-11-29 18:32:25.169567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.337 [2024-11-29 18:32:25.169605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.337 [2024-11-29 18:32:25.169632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.337 [2024-11-29 18:32:25.169647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:05.337 [2024-11-29 18:32:25.169688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.337 [2024-11-29 18:32:25.169708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.337 [2024-11-29 18:32:25.176997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.337 [2024-11-29 18:32:25.177027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:05.337 [2024-11-29 18:32:25.177040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.337 [2024-11-29 18:32:25.177046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.337 [2024-11-29 18:32:25.182952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.337 [2024-11-29 18:32:25.182983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:05.337 [2024-11-29 18:32:25.182991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.337 [2024-11-29 18:32:25.182998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.337 [2024-11-29 18:32:25.183018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.337 [2024-11-29 18:32:25.183024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:05.337 [2024-11-29 18:32:25.183030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.337 [2024-11-29 18:32:25.183036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.337 [2024-11-29 18:32:25.183058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.337 [2024-11-29 18:32:25.183069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:05.337 [2024-11-29 18:32:25.183074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.337 [2024-11-29 18:32:25.183080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.337 [2024-11-29 18:32:25.183127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.337 [2024-11-29 18:32:25.183134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:05.337 [2024-11-29 18:32:25.183140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.337 [2024-11-29 18:32:25.183145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.337 [2024-11-29 18:32:25.183167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.337 [2024-11-29 18:32:25.183175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:05.337 [2024-11-29 18:32:25.183183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:05.337 [2024-11-29 18:32:25.183189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.337 [2024-11-29 18:32:25.183217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.337 [2024-11-29 18:32:25.183226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:05.337 [2024-11-29 18:32:25.183232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.337 [2024-11-29 18:32:25.183238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.337 [2024-11-29 18:32:25.183272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.337 [2024-11-29 18:32:25.183281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:05.337 [2024-11-29 18:32:25.183287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.337 [2024-11-29 18:32:25.183294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.337 [2024-11-29 18:32:25.183401] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 44.866 ms, result 0 00:19:05.909 00:19:05.909 00:19:05.909 18:32:25 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=87917 00:19:05.909 18:32:25 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 87917 00:19:05.909 18:32:25 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87917 ']' 00:19:05.909 18:32:25 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:05.909 18:32:25 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.909 18:32:25 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.909 18:32:25 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.909 18:32:25 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.909 18:32:25 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:05.909 [2024-11-29 18:32:25.808440] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
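The ftl_trim sequence running through this part of the log reduces to a short bash flow: launch spdk_tgt, wait for its RPC socket, rebuild the ftl0 bdev from saved JSON config, issue two trims, and shut the target down cleanly. The sketch below is an illustrative reconstruction under assumptions, not the actual ftl/trim.sh or autotest_common.sh code: the config file name ftl.json and the 0.1 s poll interval are invented, rpc_get_methods is a standard SPDK RPC not shown in this log, and the binaries, socket path, and the load_config / bdev_ftl_unmap calls match what the log itself records.

#!/usr/bin/env bash
# Illustrative sketch of the trim test flow in this log; paths taken from the log, ftl.json assumed.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk.sock

# Start the SPDK target with FTL init tracing enabled (-L ftl_init), as trim.sh@71 does.
"$SPDK_DIR/build/bin/spdk_tgt" -L ftl_init &
svcpid=$!

# waitforlisten equivalent: poll until the target answers RPCs on its UNIX domain socket.
until "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done

# Recreate the ftl0 bdev from a previously saved config; this drives the 'FTL startup'
# management process logged below (the config file name is an assumption).
"$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" load_config < ftl.json

# Trim 1024 blocks at the start and at the end of the 23592960-entry L2P range
# (23591936 = 23592960 - 1024); each call appears below as an 'FTL trim' process.
"$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
"$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

# killprocess equivalent: SIGTERM plus wait, which triggers the clean 'FTL shutdown' sequence.
kill "$svcpid"
wait "$svcpid"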
00:19:05.909 [2024-11-29 18:32:25.808572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87917 ] 00:19:06.170 [2024-11-29 18:32:25.963478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.170 [2024-11-29 18:32:25.981169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.742 18:32:26 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.742 18:32:26 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:06.742 18:32:26 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:07.004 [2024-11-29 18:32:26.837562] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:07.004 [2024-11-29 18:32:26.837610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:07.268 [2024-11-29 18:32:27.000480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.268 [2024-11-29 18:32:27.000604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:07.268 [2024-11-29 18:32:27.000620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:07.268 [2024-11-29 18:32:27.000628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.268 [2024-11-29 18:32:27.002351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.268 [2024-11-29 18:32:27.002382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:07.268 [2024-11-29 18:32:27.002389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.708 ms 00:19:07.268 [2024-11-29 18:32:27.002396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.268 [2024-11-29 18:32:27.002449] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:07.268 [2024-11-29 18:32:27.002635] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:07.268 [2024-11-29 18:32:27.002646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.268 [2024-11-29 18:32:27.002654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:07.268 [2024-11-29 18:32:27.002661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:19:07.268 [2024-11-29 18:32:27.002668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.268 [2024-11-29 18:32:27.003701] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:07.268 [2024-11-29 18:32:27.005771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.268 [2024-11-29 18:32:27.005797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:07.268 [2024-11-29 18:32:27.005807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.069 ms 00:19:07.268 [2024-11-29 18:32:27.005812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.268 [2024-11-29 18:32:27.005857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.268 [2024-11-29 18:32:27.005864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:07.268 [2024-11-29 18:32:27.005873] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:07.268 [2024-11-29 18:32:27.005879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.268 [2024-11-29 18:32:27.010063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.268 [2024-11-29 18:32:27.010093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:07.268 [2024-11-29 18:32:27.010104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.145 ms 00:19:07.268 [2024-11-29 18:32:27.010110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.268 [2024-11-29 18:32:27.010185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.268 [2024-11-29 18:32:27.010192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:07.268 [2024-11-29 18:32:27.010205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:07.268 [2024-11-29 18:32:27.010211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.268 [2024-11-29 18:32:27.010233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.268 [2024-11-29 18:32:27.010240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:07.268 [2024-11-29 18:32:27.010247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:07.268 [2024-11-29 18:32:27.010253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.268 [2024-11-29 18:32:27.010271] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:07.268 [2024-11-29 18:32:27.011408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.268 [2024-11-29 18:32:27.011427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:07.268 [2024-11-29 18:32:27.011436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:19:07.268 [2024-11-29 18:32:27.011443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.268 [2024-11-29 18:32:27.011486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.268 [2024-11-29 18:32:27.011494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:07.268 [2024-11-29 18:32:27.011501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:07.268 [2024-11-29 18:32:27.011508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.268 [2024-11-29 18:32:27.011522] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:07.268 [2024-11-29 18:32:27.011536] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:07.268 [2024-11-29 18:32:27.011567] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:07.268 [2024-11-29 18:32:27.011581] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:07.268 [2024-11-29 18:32:27.011661] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:07.268 [2024-11-29 18:32:27.011670] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:07.268 [2024-11-29 18:32:27.011678] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:07.268 [2024-11-29 18:32:27.011688] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:07.268 [2024-11-29 18:32:27.011694] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:07.268 [2024-11-29 18:32:27.011704] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:07.268 [2024-11-29 18:32:27.011709] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:07.268 [2024-11-29 18:32:27.011718] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:07.268 [2024-11-29 18:32:27.011723] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:07.268 [2024-11-29 18:32:27.011730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.268 [2024-11-29 18:32:27.011736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:07.268 [2024-11-29 18:32:27.011743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:19:07.268 [2024-11-29 18:32:27.011748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.268 [2024-11-29 18:32:27.011815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.268 [2024-11-29 18:32:27.011825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:07.268 [2024-11-29 18:32:27.011835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:07.268 [2024-11-29 18:32:27.011840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.268 [2024-11-29 18:32:27.011918] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:07.268 [2024-11-29 18:32:27.011929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:07.268 [2024-11-29 18:32:27.011936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:07.268 [2024-11-29 18:32:27.011942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.268 [2024-11-29 18:32:27.011952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:07.268 [2024-11-29 18:32:27.011962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:07.268 [2024-11-29 18:32:27.011969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:07.268 [2024-11-29 18:32:27.011975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:07.268 [2024-11-29 18:32:27.011981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:07.268 [2024-11-29 18:32:27.011986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:07.269 [2024-11-29 18:32:27.011992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:07.269 [2024-11-29 18:32:27.011998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:07.269 [2024-11-29 18:32:27.012004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:07.269 [2024-11-29 18:32:27.012009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:07.269 [2024-11-29 18:32:27.012016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:07.269 [2024-11-29 18:32:27.012021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.269 
[2024-11-29 18:32:27.012027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:07.269 [2024-11-29 18:32:27.012032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:07.269 [2024-11-29 18:32:27.012038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.269 [2024-11-29 18:32:27.012043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:07.269 [2024-11-29 18:32:27.012050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:07.269 [2024-11-29 18:32:27.012055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.269 [2024-11-29 18:32:27.012064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:07.269 [2024-11-29 18:32:27.012070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:07.269 [2024-11-29 18:32:27.012077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.269 [2024-11-29 18:32:27.012082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:07.269 [2024-11-29 18:32:27.012089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:07.269 [2024-11-29 18:32:27.012094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.269 [2024-11-29 18:32:27.012101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:07.269 [2024-11-29 18:32:27.012107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:07.269 [2024-11-29 18:32:27.012113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.269 [2024-11-29 18:32:27.012119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:07.269 [2024-11-29 18:32:27.012127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:07.269 [2024-11-29 18:32:27.012133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:07.269 [2024-11-29 18:32:27.012139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:07.269 [2024-11-29 18:32:27.012145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:07.269 [2024-11-29 18:32:27.012153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:07.269 [2024-11-29 18:32:27.012160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:07.269 [2024-11-29 18:32:27.012167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:07.269 [2024-11-29 18:32:27.012174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.269 [2024-11-29 18:32:27.012181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:07.269 [2024-11-29 18:32:27.012187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:07.269 [2024-11-29 18:32:27.012194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.269 [2024-11-29 18:32:27.012200] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:07.269 [2024-11-29 18:32:27.012208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:07.269 [2024-11-29 18:32:27.012214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:07.269 [2024-11-29 18:32:27.012222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.269 [2024-11-29 18:32:27.012228] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:07.269 [2024-11-29 18:32:27.012235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:07.269 [2024-11-29 18:32:27.012241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:07.269 [2024-11-29 18:32:27.012248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:07.269 [2024-11-29 18:32:27.012254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:07.269 [2024-11-29 18:32:27.012262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:07.269 [2024-11-29 18:32:27.012268] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:07.269 [2024-11-29 18:32:27.012277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:07.269 [2024-11-29 18:32:27.012289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:07.269 [2024-11-29 18:32:27.012297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:07.269 [2024-11-29 18:32:27.012303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:07.269 [2024-11-29 18:32:27.012310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:07.269 [2024-11-29 18:32:27.012317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:07.269 [2024-11-29 18:32:27.012324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:07.269 [2024-11-29 18:32:27.012330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:07.269 [2024-11-29 18:32:27.012337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:07.269 [2024-11-29 18:32:27.012343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:07.269 [2024-11-29 18:32:27.012351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:07.269 [2024-11-29 18:32:27.012357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:07.269 [2024-11-29 18:32:27.012364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:07.269 [2024-11-29 18:32:27.012371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:07.269 [2024-11-29 18:32:27.012380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:07.269 [2024-11-29 18:32:27.012386] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:07.269 [2024-11-29 
18:32:27.012394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:07.269 [2024-11-29 18:32:27.012401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:07.269 [2024-11-29 18:32:27.012408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:07.269 [2024-11-29 18:32:27.012415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:07.269 [2024-11-29 18:32:27.012422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:07.269 [2024-11-29 18:32:27.012428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.269 [2024-11-29 18:32:27.012436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:07.269 [2024-11-29 18:32:27.012442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:19:07.269 [2024-11-29 18:32:27.012448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.269 [2024-11-29 18:32:27.020189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.269 [2024-11-29 18:32:27.020279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:07.269 [2024-11-29 18:32:27.020319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.681 ms 00:19:07.269 [2024-11-29 18:32:27.020338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.269 [2024-11-29 18:32:27.020443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.269 [2024-11-29 18:32:27.020556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:07.269 [2024-11-29 18:32:27.020578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:07.269 [2024-11-29 18:32:27.020593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.269 [2024-11-29 18:32:27.027854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.269 [2024-11-29 18:32:27.027952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:07.269 [2024-11-29 18:32:27.027996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.235 ms 00:19:07.269 [2024-11-29 18:32:27.028017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.269 [2024-11-29 18:32:27.028061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.269 [2024-11-29 18:32:27.028080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:07.269 [2024-11-29 18:32:27.028122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:07.269 [2024-11-29 18:32:27.028140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.269 [2024-11-29 18:32:27.028420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.269 [2024-11-29 18:32:27.028472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:07.269 [2024-11-29 18:32:27.028490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:19:07.269 [2024-11-29 18:32:27.028542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:07.269 [2024-11-29 18:32:27.028656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.269 [2024-11-29 18:32:27.028676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:07.269 [2024-11-29 18:32:27.028691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:07.269 [2024-11-29 18:32:27.028753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.269 [2024-11-29 18:32:27.033327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.269 [2024-11-29 18:32:27.033410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:07.269 [2024-11-29 18:32:27.033473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.550 ms 00:19:07.269 [2024-11-29 18:32:27.033522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.269 [2024-11-29 18:32:27.046447] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:07.270 [2024-11-29 18:32:27.046596] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:07.270 [2024-11-29 18:32:27.046658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.270 [2024-11-29 18:32:27.046928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:07.270 [2024-11-29 18:32:27.046973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.047 ms 00:19:07.270 [2024-11-29 18:32:27.047037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.270 [2024-11-29 18:32:27.063196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.270 [2024-11-29 18:32:27.063289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:07.270 [2024-11-29 18:32:27.063330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.099 ms 00:19:07.270 [2024-11-29 18:32:27.063352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.270 [2024-11-29 18:32:27.065114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.270 [2024-11-29 18:32:27.065219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:07.270 [2024-11-29 18:32:27.065265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:19:07.270 [2024-11-29 18:32:27.065285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.270 [2024-11-29 18:32:27.066608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.270 [2024-11-29 18:32:27.066695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:07.270 [2024-11-29 18:32:27.066735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.286 ms 00:19:07.270 [2024-11-29 18:32:27.066753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.270 [2024-11-29 18:32:27.067000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.270 [2024-11-29 18:32:27.067060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:07.270 [2024-11-29 18:32:27.067114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:19:07.270 [2024-11-29 18:32:27.067133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.270 [2024-11-29 18:32:27.080463] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.270 [2024-11-29 18:32:27.080570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:07.270 [2024-11-29 18:32:27.080613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.303 ms 00:19:07.270 [2024-11-29 18:32:27.080635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.270 [2024-11-29 18:32:27.086339] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:07.270 [2024-11-29 18:32:27.097492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.270 [2024-11-29 18:32:27.097584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:07.270 [2024-11-29 18:32:27.097623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.806 ms 00:19:07.270 [2024-11-29 18:32:27.097640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.270 [2024-11-29 18:32:27.097729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.270 [2024-11-29 18:32:27.097751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:07.270 [2024-11-29 18:32:27.097767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:07.270 [2024-11-29 18:32:27.097784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.270 [2024-11-29 18:32:27.097837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.270 [2024-11-29 18:32:27.097853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:07.270 [2024-11-29 18:32:27.097872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:07.270 [2024-11-29 18:32:27.097926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.270 [2024-11-29 18:32:27.097957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.270 [2024-11-29 18:32:27.097974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:07.270 [2024-11-29 18:32:27.097994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:07.270 [2024-11-29 18:32:27.098008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.270 [2024-11-29 18:32:27.098040] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:07.270 [2024-11-29 18:32:27.098056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.270 [2024-11-29 18:32:27.098293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:07.270 [2024-11-29 18:32:27.098308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:07.270 [2024-11-29 18:32:27.098326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.270 [2024-11-29 18:32:27.101311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.270 [2024-11-29 18:32:27.101397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:07.270 [2024-11-29 18:32:27.101438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.960 ms 00:19:07.270 [2024-11-29 18:32:27.101467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.270 [2024-11-29 18:32:27.101531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.270 [2024-11-29 18:32:27.101622] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:07.270 [2024-11-29 18:32:27.101629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:07.270 [2024-11-29 18:32:27.101636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.270 [2024-11-29 18:32:27.102287] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:07.270 [2024-11-29 18:32:27.103042] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 101.610 ms, result 0 00:19:07.270 [2024-11-29 18:32:27.103918] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:07.270 Some configs were skipped because the RPC state that can call them passed over. 00:19:07.270 18:32:27 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:07.531 [2024-11-29 18:32:27.327482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.531 [2024-11-29 18:32:27.327577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:07.531 [2024-11-29 18:32:27.327620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.384 ms 00:19:07.531 [2024-11-29 18:32:27.327637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.531 [2024-11-29 18:32:27.327675] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.580 ms, result 0 00:19:07.531 true 00:19:07.531 18:32:27 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:07.792 [2024-11-29 18:32:27.531520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.792 [2024-11-29 18:32:27.531615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:07.792 [2024-11-29 18:32:27.531653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.254 ms 00:19:07.792 [2024-11-29 18:32:27.531672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.792 [2024-11-29 18:32:27.531709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.440 ms, result 0 00:19:07.792 true 00:19:07.792 18:32:27 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 87917 00:19:07.792 18:32:27 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87917 ']' 00:19:07.792 18:32:27 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87917 00:19:07.792 18:32:27 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:07.792 18:32:27 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.792 18:32:27 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87917 00:19:07.792 killing process with pid 87917 00:19:07.792 18:32:27 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.792 18:32:27 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.792 18:32:27 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87917' 00:19:07.792 18:32:27 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87917 00:19:07.792 18:32:27 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87917 00:19:07.792 [2024-11-29 18:32:27.653732] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.792 [2024-11-29 18:32:27.653777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:07.792 [2024-11-29 18:32:27.653788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:07.792 [2024-11-29 18:32:27.653798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.792 [2024-11-29 18:32:27.653817] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:07.792 [2024-11-29 18:32:27.654210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.792 [2024-11-29 18:32:27.654228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:07.792 [2024-11-29 18:32:27.654235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:19:07.792 [2024-11-29 18:32:27.654242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.792 [2024-11-29 18:32:27.654652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.792 [2024-11-29 18:32:27.654684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:07.792 [2024-11-29 18:32:27.654703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:19:07.792 [2024-11-29 18:32:27.654720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.792 [2024-11-29 18:32:27.658244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.792 [2024-11-29 18:32:27.658327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:07.792 [2024-11-29 18:32:27.658366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.500 ms 00:19:07.792 [2024-11-29 18:32:27.658389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.792 [2024-11-29 18:32:27.663731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.792 [2024-11-29 18:32:27.663841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:07.792 [2024-11-29 18:32:27.663886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.104 ms 00:19:07.792 [2024-11-29 18:32:27.663908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.792 [2024-11-29 18:32:27.666021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.792 [2024-11-29 18:32:27.666114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:07.792 [2024-11-29 18:32:27.666155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.049 ms 00:19:07.793 [2024-11-29 18:32:27.666173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.793 [2024-11-29 18:32:27.669894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.793 [2024-11-29 18:32:27.669977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:07.793 [2024-11-29 18:32:27.670015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.688 ms 00:19:07.793 [2024-11-29 18:32:27.670033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.793 [2024-11-29 18:32:27.670149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.793 [2024-11-29 18:32:27.670170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:07.793 [2024-11-29 18:32:27.670186] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:07.793 [2024-11-29 18:32:27.670201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.793
[2024-11-29 18:32:27.672682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.793 [2024-11-29 18:32:27.672760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:07.793 [2024-11-29 18:32:27.672797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.459 ms 00:19:07.793 [2024-11-29 18:32:27.672818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.793
[2024-11-29 18:32:27.674859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.793 [2024-11-29 18:32:27.674940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:07.793 [2024-11-29 18:32:27.674950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.009 ms 00:19:07.793 [2024-11-29 18:32:27.674957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.793
[2024-11-29 18:32:27.676230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.793 [2024-11-29 18:32:27.676313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:07.793 [2024-11-29 18:32:27.676323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.248 ms 00:19:07.793 [2024-11-29 18:32:27.676330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.793
[2024-11-29 18:32:27.677972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.793 [2024-11-29 18:32:27.678000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:07.793 [2024-11-29 18:32:27.678007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.599 ms 00:19:07.793 [2024-11-29 18:32:27.678013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.793
[2024-11-29 18:32:27.678038] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:07.793
[2024-11-29 18:32:27.678050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free 00:19:07.794
[2024-11-29 18:32:27.678762] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:07.794 [2024-11-29 18:32:27.678768] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 810cac8b-8d30-4fbe-8f84-c99f968afe2f 00:19:07.794 [2024-11-29 18:32:27.678777] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:07.794 [2024-11-29 18:32:27.678783] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:07.794 [2024-11-29 18:32:27.678790] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:07.794 [2024-11-29 18:32:27.678795] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:07.794
[2024-11-29 18:32:27.678803] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:07.794 [2024-11-29 18:32:27.678810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:07.794 [2024-11-29 18:32:27.678816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:07.794 [2024-11-29 18:32:27.678821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:07.794 [2024-11-29 18:32:27.678828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:07.794
[2024-11-29 18:32:27.678833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.794
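The shutdown dump above (total valid LBAs: 0, all bands free) is the state left behind by the two unmap calls issued by trim.sh earlier in the run. As a minimal sketch of how to replay them against a running SPDK target, assuming an FTL bdev named ftl0 is loaded and that SPDK_DIR points at the SPDK tree used by this test (the path is taken from the log, not a guaranteed default):

    # Minimal sketch: replay the two trim (unmap) calls recorded above.
    # Assumes a running SPDK app with an FTL bdev named ftl0; SPDK_DIR is
    # an assumption based on the paths that appear in this log.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # trim.sh@78: unmap 1024 blocks at the start of the device
    "$SPDK_DIR/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    # trim.sh@79: unmap the last 1024 blocks (23592960 L2P entries - 1024 = 23591936)
    "$SPDK_DIR/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024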
[2024-11-29 18:32:27.678840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:07.794 [2024-11-29 18:32:27.678846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:19:07.794 [2024-11-29 18:32:27.678858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.794 [2024-11-29 18:32:27.680081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.794 [2024-11-29 18:32:27.680101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:07.794 [2024-11-29 18:32:27.680108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms 00:19:07.794 [2024-11-29 18:32:27.680116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.794 [2024-11-29 18:32:27.680188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.794 [2024-11-29 18:32:27.680197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:07.794 [2024-11-29 18:32:27.680203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:07.794 [2024-11-29 18:32:27.680212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.794 [2024-11-29 18:32:27.684643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.794 [2024-11-29 18:32:27.684674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:07.794 [2024-11-29 18:32:27.684681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.794 [2024-11-29 18:32:27.684688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.794 [2024-11-29 18:32:27.684747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.794 [2024-11-29 18:32:27.684756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:07.794 [2024-11-29 18:32:27.684762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.794 [2024-11-29 18:32:27.684773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.794 [2024-11-29 18:32:27.684802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.794 [2024-11-29 18:32:27.684811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:07.794 [2024-11-29 18:32:27.684817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.795 [2024-11-29 18:32:27.684825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.795 [2024-11-29 18:32:27.684838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.795 [2024-11-29 18:32:27.684845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:07.795 [2024-11-29 18:32:27.684851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.795 [2024-11-29 18:32:27.684860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.795 [2024-11-29 18:32:27.692626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.795 [2024-11-29 18:32:27.692660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:07.795 [2024-11-29 18:32:27.692668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.795 [2024-11-29 18:32:27.692675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.056 [2024-11-29 18:32:27.698601] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.056 [2024-11-29 18:32:27.698765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:08.056 [2024-11-29 18:32:27.698776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.056 [2024-11-29 18:32:27.698786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.056 [2024-11-29 18:32:27.698835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.056 [2024-11-29 18:32:27.698844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:08.056 [2024-11-29 18:32:27.698850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.056 [2024-11-29 18:32:27.698857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.056 [2024-11-29 18:32:27.698880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.056 [2024-11-29 18:32:27.698888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:08.056 [2024-11-29 18:32:27.698893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.056 [2024-11-29 18:32:27.698901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.056 [2024-11-29 18:32:27.698957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.056 [2024-11-29 18:32:27.698968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:08.056 [2024-11-29 18:32:27.698974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.056 [2024-11-29 18:32:27.698981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.056 [2024-11-29 18:32:27.699007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.056 [2024-11-29 18:32:27.699016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:08.056 [2024-11-29 18:32:27.699021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.056 [2024-11-29 18:32:27.699029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.057 [2024-11-29 18:32:27.699060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.057 [2024-11-29 18:32:27.699070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:08.057 [2024-11-29 18:32:27.699076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.057 [2024-11-29 18:32:27.699083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.057 [2024-11-29 18:32:27.699119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.057 [2024-11-29 18:32:27.699129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:08.057 [2024-11-29 18:32:27.699134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.057 [2024-11-29 18:32:27.699141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.057 [2024-11-29 18:32:27.699243] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 45.496 ms, result 0 00:19:08.057 18:32:27 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:08.057 18:32:27 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:08.057 [2024-11-29 18:32:27.911486] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:19:08.057 [2024-11-29 18:32:27.911743] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87953 ] 00:19:08.318 [2024-11-29 18:32:28.064047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.318 [2024-11-29 18:32:28.080511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.318 [2024-11-29 18:32:28.161732] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:08.318 [2024-11-29 18:32:28.161784] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:08.582 [2024-11-29 18:32:28.304140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.582 [2024-11-29 18:32:28.304174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:08.582 [2024-11-29 18:32:28.304189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:08.582 [2024-11-29 18:32:28.304195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.582 [2024-11-29 18:32:28.305915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.582 [2024-11-29 18:32:28.305941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:08.582 [2024-11-29 18:32:28.305949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:19:08.582 [2024-11-29 18:32:28.305954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.582 [2024-11-29 18:32:28.306009] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:08.582 [2024-11-29 18:32:28.306192] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:08.582 [2024-11-29 18:32:28.306204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.582 [2024-11-29 18:32:28.306210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:08.582 [2024-11-29 18:32:28.306217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:19:08.582 [2024-11-29 18:32:28.306223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.582 [2024-11-29 18:32:28.307190] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:08.582 [2024-11-29 18:32:28.309448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.582 [2024-11-29 18:32:28.309489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:08.582 [2024-11-29 18:32:28.309496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.260 ms 00:19:08.582 [2024-11-29 18:32:28.309504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.582 [2024-11-29 18:32:28.309551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.582 [2024-11-29 18:32:28.309559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:08.582 [2024-11-29 18:32:28.309565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.013 ms 00:19:08.582 [2024-11-29 18:32:28.309570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.582 [2024-11-29 18:32:28.313787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.582 [2024-11-29 18:32:28.313900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:08.582 [2024-11-29 18:32:28.313915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.187 ms 00:19:08.582 [2024-11-29 18:32:28.313921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.582 [2024-11-29 18:32:28.314012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.582 [2024-11-29 18:32:28.314022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:08.582 [2024-11-29 18:32:28.314029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:08.582 [2024-11-29 18:32:28.314037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.582 [2024-11-29 18:32:28.314055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.582 [2024-11-29 18:32:28.314061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:08.582 [2024-11-29 18:32:28.314083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:08.582 [2024-11-29 18:32:28.314089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.582 [2024-11-29 18:32:28.314104] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:08.582 [2024-11-29 18:32:28.315233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.582 [2024-11-29 18:32:28.315255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:08.582 [2024-11-29 18:32:28.315262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.132 ms 00:19:08.582 [2024-11-29 18:32:28.315273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.582 [2024-11-29 18:32:28.315298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.582 [2024-11-29 18:32:28.315308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:08.582 [2024-11-29 18:32:28.315316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:08.582 [2024-11-29 18:32:28.315321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.582 [2024-11-29 18:32:28.315337] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:08.582 [2024-11-29 18:32:28.315350] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:08.582 [2024-11-29 18:32:28.315378] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:08.582 [2024-11-29 18:32:28.315392] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:08.582 [2024-11-29 18:32:28.315481] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:08.582 [2024-11-29 18:32:28.315491] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:08.582 [2024-11-29 18:32:28.315499] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:08.582 [2024-11-29 18:32:28.315507] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:08.582 [2024-11-29 18:32:28.315513] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:08.582 [2024-11-29 18:32:28.315520] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:08.582 [2024-11-29 18:32:28.315526] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:08.582 [2024-11-29 18:32:28.315532] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:08.582 [2024-11-29 18:32:28.315539] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:08.582 [2024-11-29 18:32:28.315545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.582 [2024-11-29 18:32:28.315552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:08.582 [2024-11-29 18:32:28.315558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:19:08.582 [2024-11-29 18:32:28.315564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.582 [2024-11-29 18:32:28.315630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.582 [2024-11-29 18:32:28.315637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:08.582 [2024-11-29 18:32:28.315643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:08.582 [2024-11-29 18:32:28.315651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.582 [2024-11-29 18:32:28.315727] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:08.583 [2024-11-29 18:32:28.315738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:08.583 [2024-11-29 18:32:28.315745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:08.583 [2024-11-29 18:32:28.315751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.583 [2024-11-29 18:32:28.315757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:08.583 [2024-11-29 18:32:28.315762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:08.583 [2024-11-29 18:32:28.315767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:08.583 [2024-11-29 18:32:28.315774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:08.583 [2024-11-29 18:32:28.315782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:08.583 [2024-11-29 18:32:28.315788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:08.583 [2024-11-29 18:32:28.315794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:08.583 [2024-11-29 18:32:28.315799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:08.583 [2024-11-29 18:32:28.315804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:08.583 [2024-11-29 18:32:28.315809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:08.583 [2024-11-29 18:32:28.315814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:08.583 [2024-11-29 18:32:28.315819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.583 [2024-11-29 18:32:28.315824] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:08.583 [2024-11-29 18:32:28.315829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:08.583 [2024-11-29 18:32:28.315835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.583 [2024-11-29 18:32:28.315840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:08.583 [2024-11-29 18:32:28.315846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:08.583 [2024-11-29 18:32:28.315851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.583 [2024-11-29 18:32:28.315857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:08.583 [2024-11-29 18:32:28.315862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:08.583 [2024-11-29 18:32:28.315870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.583 [2024-11-29 18:32:28.315875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:08.583 [2024-11-29 18:32:28.315880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:08.583 [2024-11-29 18:32:28.315887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.583 [2024-11-29 18:32:28.315893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:08.583 [2024-11-29 18:32:28.315900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:08.583 [2024-11-29 18:32:28.315905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.583 [2024-11-29 18:32:28.315911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:08.583 [2024-11-29 18:32:28.315917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:08.583 [2024-11-29 18:32:28.315923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:08.583 [2024-11-29 18:32:28.315929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:08.583 [2024-11-29 18:32:28.315935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:08.583 [2024-11-29 18:32:28.315941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:08.583 [2024-11-29 18:32:28.315946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:08.583 [2024-11-29 18:32:28.315952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:08.583 [2024-11-29 18:32:28.315958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.583 [2024-11-29 18:32:28.315966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:08.583 [2024-11-29 18:32:28.315971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:08.583 [2024-11-29 18:32:28.315977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.583 [2024-11-29 18:32:28.315983] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:08.583 [2024-11-29 18:32:28.315990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:08.583 [2024-11-29 18:32:28.315996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:08.583 [2024-11-29 18:32:28.316002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.583 [2024-11-29 18:32:28.316008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:08.583 
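The region sizes in the layout dump are consistent with the superblock parameters logged alongside them; for instance, the l2p region (blocks: 90.00 MiB) is exactly what 23592960 L2P entries at an L2P address size of 4 bytes require. A quick cross-check in plain shell:

    # 23592960 L2P entries * 4 bytes per address = 94371840 bytes = 90 MiB,
    # matching the "Region l2p ... blocks: 90.00 MiB" lines above.
    echo $(( 23592960 * 4 / 1024 / 1024 ))   # prints 90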
[2024-11-29 18:32:28.316014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:08.583 [2024-11-29 18:32:28.316019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:08.583 [2024-11-29 18:32:28.316025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:08.583 [2024-11-29 18:32:28.316031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:08.583 [2024-11-29 18:32:28.316036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:08.583 [2024-11-29 18:32:28.316044] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:08.583 [2024-11-29 18:32:28.316052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:08.583 [2024-11-29 18:32:28.316059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:08.583 [2024-11-29 18:32:28.316067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:08.583 [2024-11-29 18:32:28.316073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:08.583 [2024-11-29 18:32:28.316082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:08.583 [2024-11-29 18:32:28.316088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:08.583 [2024-11-29 18:32:28.316094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:08.583 [2024-11-29 18:32:28.316101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:08.583 [2024-11-29 18:32:28.316107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:08.583 [2024-11-29 18:32:28.316114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:08.583 [2024-11-29 18:32:28.316121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:08.583 [2024-11-29 18:32:28.316127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:08.583 [2024-11-29 18:32:28.316134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:08.583 [2024-11-29 18:32:28.316140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:08.583 [2024-11-29 18:32:28.316146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:08.583 [2024-11-29 18:32:28.316152] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:08.583 [2024-11-29 18:32:28.316161] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:08.583 [2024-11-29 18:32:28.316169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:08.583 [2024-11-29 18:32:28.316177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:08.583 [2024-11-29 18:32:28.316184] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:08.583 [2024-11-29 18:32:28.316190] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:08.583 [2024-11-29 18:32:28.316196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.583 [2024-11-29 18:32:28.316203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:08.583 [2024-11-29 18:32:28.316211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:19:08.583 [2024-11-29 18:32:28.316217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.583 [2024-11-29 18:32:28.323810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.583 [2024-11-29 18:32:28.323839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:08.583 [2024-11-29 18:32:28.323846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.551 ms 00:19:08.583 [2024-11-29 18:32:28.323852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.583 [2024-11-29 18:32:28.323930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.583 [2024-11-29 18:32:28.323937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:08.583 [2024-11-29 18:32:28.323945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:08.583 [2024-11-29 18:32:28.323950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.583 [2024-11-29 18:32:28.340123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.583 [2024-11-29 18:32:28.340175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:08.583 [2024-11-29 18:32:28.340193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.145 ms 00:19:08.583 [2024-11-29 18:32:28.340206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.583 [2024-11-29 18:32:28.340318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.583 [2024-11-29 18:32:28.340336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:08.583 [2024-11-29 18:32:28.340350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:08.583 [2024-11-29 18:32:28.340362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.583 [2024-11-29 18:32:28.340743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.583 [2024-11-29 18:32:28.340780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:08.583 [2024-11-29 18:32:28.340795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:19:08.583 [2024-11-29 18:32:28.340808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.583 [2024-11-29 
18:32:28.341000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.583 [2024-11-29 18:32:28.341017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:08.584 [2024-11-29 18:32:28.341031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:19:08.584 [2024-11-29 18:32:28.341044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.584 [2024-11-29 18:32:28.347380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.584 [2024-11-29 18:32:28.347586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:08.584 [2024-11-29 18:32:28.347608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.305 ms 00:19:08.584 [2024-11-29 18:32:28.347619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.584 [2024-11-29 18:32:28.350674] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:08.584 [2024-11-29 18:32:28.350770] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:08.584 [2024-11-29 18:32:28.350780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.584 [2024-11-29 18:32:28.350787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:08.584 [2024-11-29 18:32:28.350793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.028 ms 00:19:08.584 [2024-11-29 18:32:28.350798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.584 [2024-11-29 18:32:28.362298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.584 [2024-11-29 18:32:28.362386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:08.584 [2024-11-29 18:32:28.362404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.458 ms 00:19:08.584 [2024-11-29 18:32:28.362410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.584 [2024-11-29 18:32:28.363832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.584 [2024-11-29 18:32:28.363853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:08.584 [2024-11-29 18:32:28.363860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.354 ms 00:19:08.584 [2024-11-29 18:32:28.363866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.584 [2024-11-29 18:32:28.365262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.584 [2024-11-29 18:32:28.365286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:08.584 [2024-11-29 18:32:28.365298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.368 ms 00:19:08.584 [2024-11-29 18:32:28.365304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.584 [2024-11-29 18:32:28.365628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.584 [2024-11-29 18:32:28.365655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:08.584 [2024-11-29 18:32:28.365672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:19:08.584 [2024-11-29 18:32:28.365687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.584 [2024-11-29 18:32:28.379208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:08.584 [2024-11-29 18:32:28.379333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:08.584 [2024-11-29 18:32:28.379347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.463 ms 00:19:08.584 [2024-11-29 18:32:28.379354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.584 [2024-11-29 18:32:28.385113] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:08.584 [2024-11-29 18:32:28.396905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.584 [2024-11-29 18:32:28.396933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:08.584 [2024-11-29 18:32:28.396943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.507 ms 00:19:08.584 [2024-11-29 18:32:28.396949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.584 [2024-11-29 18:32:28.397029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.584 [2024-11-29 18:32:28.397037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:08.584 [2024-11-29 18:32:28.397045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:08.584 [2024-11-29 18:32:28.397051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.584 [2024-11-29 18:32:28.397091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.584 [2024-11-29 18:32:28.397099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:08.584 [2024-11-29 18:32:28.397105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:08.584 [2024-11-29 18:32:28.397110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.584 [2024-11-29 18:32:28.397134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.584 [2024-11-29 18:32:28.397141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:08.584 [2024-11-29 18:32:28.397147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:08.584 [2024-11-29 18:32:28.397154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.584 [2024-11-29 18:32:28.397179] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:08.584 [2024-11-29 18:32:28.397186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.584 [2024-11-29 18:32:28.397192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:08.584 [2024-11-29 18:32:28.397199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:08.584 [2024-11-29 18:32:28.397205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.584 [2024-11-29 18:32:28.400768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.584 [2024-11-29 18:32:28.400794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:08.584 [2024-11-29 18:32:28.400802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.548 ms 00:19:08.584 [2024-11-29 18:32:28.400812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.584 [2024-11-29 18:32:28.400871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.584 [2024-11-29 18:32:28.400878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:08.584 [2024-11-29 18:32:28.400885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:08.584 [2024-11-29 18:32:28.400891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.584 [2024-11-29 18:32:28.401531] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:08.584 [2024-11-29 18:32:28.402295] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 97.157 ms, result 0 00:19:08.584 [2024-11-29 18:32:28.403141] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:08.584 [2024-11-29 18:32:28.413413] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:09.529  [2024-11-29T18:32:30.824Z] Copying: 15/256 [MB] (15 MBps) [2024-11-29T18:32:31.769Z] Copying: 29/256 [MB] (13 MBps) [2024-11-29T18:32:32.714Z] Copying: 39/256 [MB] (10 MBps) [2024-11-29T18:32:33.659Z] Copying: 60/256 [MB] (20 MBps) [2024-11-29T18:32:34.603Z] Copying: 80/256 [MB] (20 MBps) [2024-11-29T18:32:35.555Z] Copying: 103/256 [MB] (23 MBps) [2024-11-29T18:32:36.500Z] Copying: 125/256 [MB] (21 MBps) [2024-11-29T18:32:37.444Z] Copying: 145/256 [MB] (20 MBps) [2024-11-29T18:32:38.831Z] Copying: 165/256 [MB] (20 MBps) [2024-11-29T18:32:39.773Z] Copying: 179/256 [MB] (13 MBps) [2024-11-29T18:32:40.717Z] Copying: 201/256 [MB] (21 MBps) [2024-11-29T18:32:41.661Z] Copying: 222/256 [MB] (21 MBps) [2024-11-29T18:32:42.235Z] Copying: 244/256 [MB] (21 MBps) [2024-11-29T18:32:42.235Z] Copying: 256/256 [MB] (average 18 MBps)[2024-11-29 18:32:41.998204] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:22.330 [2024-11-29 18:32:42.000215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.330 [2024-11-29 18:32:42.000272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:22.330 [2024-11-29 18:32:42.000293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:22.330 [2024-11-29 18:32:42.000303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.330 [2024-11-29 18:32:42.000326] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:22.331 [2024-11-29 18:32:42.001042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.331 [2024-11-29 18:32:42.001080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:22.331 [2024-11-29 18:32:42.001093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:19:22.331 [2024-11-29 18:32:42.001103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.331 [2024-11-29 18:32:42.001370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.331 [2024-11-29 18:32:42.001381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:22.331 [2024-11-29 18:32:42.001400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:19:22.331 [2024-11-29 18:32:42.001409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.331 [2024-11-29 18:32:42.005148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.331 [2024-11-29 18:32:42.005317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:19:22.331 [2024-11-29 18:32:42.005335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.720 ms 00:19:22.331 [2024-11-29 18:32:42.005344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.331 [2024-11-29 18:32:42.012311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.331 [2024-11-29 18:32:42.012519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:22.331 [2024-11-29 18:32:42.012540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.941 ms 00:19:22.331 [2024-11-29 18:32:42.012554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.331 [2024-11-29 18:32:42.015473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.331 [2024-11-29 18:32:42.015520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:22.331 [2024-11-29 18:32:42.015530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.841 ms 00:19:22.331 [2024-11-29 18:32:42.015539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.331 [2024-11-29 18:32:42.021052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.331 [2024-11-29 18:32:42.021112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:22.331 [2024-11-29 18:32:42.021123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.463 ms 00:19:22.331 [2024-11-29 18:32:42.021131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.331 [2024-11-29 18:32:42.021267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.331 [2024-11-29 18:32:42.021278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:22.331 [2024-11-29 18:32:42.021303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:22.331 [2024-11-29 18:32:42.021310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.331 [2024-11-29 18:32:42.024914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.331 [2024-11-29 18:32:42.024970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:22.331 [2024-11-29 18:32:42.024980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.585 ms 00:19:22.331 [2024-11-29 18:32:42.024987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.331 [2024-11-29 18:32:42.028117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.331 [2024-11-29 18:32:42.028170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:22.331 [2024-11-29 18:32:42.028180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.081 ms 00:19:22.331 [2024-11-29 18:32:42.028188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.331 [2024-11-29 18:32:42.030479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.331 [2024-11-29 18:32:42.030529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:22.331 [2024-11-29 18:32:42.030539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.245 ms 00:19:22.331 [2024-11-29 18:32:42.030545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.331 [2024-11-29 18:32:42.032965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.331 [2024-11-29 18:32:42.033018] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:22.331 [2024-11-29 18:32:42.033028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.340 ms 00:19:22.331 [2024-11-29 18:32:42.033035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.331 [2024-11-29 18:32:42.033093] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:22.331 [2024-11-29 18:32:42.033110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:22.331 [2024-11-29 18:32:42.033281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:22.331 [2024-11-29 18:32:42.033336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033902] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:22.332 [2024-11-29 18:32:42.033918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:22.333 [2024-11-29 18:32:42.033925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:22.333 [2024-11-29 18:32:42.033941] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:22.333 [2024-11-29 18:32:42.033950] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 810cac8b-8d30-4fbe-8f84-c99f968afe2f 00:19:22.333 [2024-11-29 18:32:42.033959] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:22.333 [2024-11-29 18:32:42.033967] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:22.333 [2024-11-29 18:32:42.033975] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:22.333 [2024-11-29 18:32:42.033984] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:22.333 [2024-11-29 18:32:42.033991] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:22.333 [2024-11-29 18:32:42.034003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:22.333 [2024-11-29 18:32:42.034011] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:22.333 [2024-11-29 18:32:42.034018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:22.333 [2024-11-29 18:32:42.034025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:22.333 [2024-11-29 18:32:42.034033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.333 [2024-11-29 18:32:42.034041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:22.333 [2024-11-29 18:32:42.034051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms 00:19:22.333 [2024-11-29 18:32:42.034059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.333 [2024-11-29 18:32:42.036514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.333 [2024-11-29 18:32:42.036541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:22.333 [2024-11-29 18:32:42.036552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.419 ms 00:19:22.333 [2024-11-29 18:32:42.036568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.333 [2024-11-29 18:32:42.036702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.333 [2024-11-29 18:32:42.036711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:22.333 [2024-11-29 18:32:42.036721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:19:22.333 [2024-11-29 18:32:42.036728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.333 [2024-11-29 18:32:42.044875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.333 [2024-11-29 18:32:42.044930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:22.333 [2024-11-29 18:32:42.044947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.333 [2024-11-29 18:32:42.044955] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:22.333 [2024-11-29 18:32:42.045034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.333 [2024-11-29 18:32:42.045044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:22.333 [2024-11-29 18:32:42.045052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.333 [2024-11-29 18:32:42.045064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.333 [2024-11-29 18:32:42.045112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.333 [2024-11-29 18:32:42.045121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:22.333 [2024-11-29 18:32:42.045129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.333 [2024-11-29 18:32:42.045137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.333 [2024-11-29 18:32:42.045156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.333 [2024-11-29 18:32:42.045168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:22.333 [2024-11-29 18:32:42.045175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.333 [2024-11-29 18:32:42.045182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.333 [2024-11-29 18:32:42.058927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.333 [2024-11-29 18:32:42.058986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:22.333 [2024-11-29 18:32:42.058997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.333 [2024-11-29 18:32:42.059012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.333 [2024-11-29 18:32:42.069103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.333 [2024-11-29 18:32:42.069290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:22.333 [2024-11-29 18:32:42.069308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.333 [2024-11-29 18:32:42.069317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.333 [2024-11-29 18:32:42.069366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.333 [2024-11-29 18:32:42.069376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.333 [2024-11-29 18:32:42.069384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.333 [2024-11-29 18:32:42.069392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.333 [2024-11-29 18:32:42.069428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.333 [2024-11-29 18:32:42.069437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.333 [2024-11-29 18:32:42.069446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.333 [2024-11-29 18:32:42.069476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.333 [2024-11-29 18:32:42.069558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.333 [2024-11-29 18:32:42.069568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.333 [2024-11-29 18:32:42.069577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:22.333 [2024-11-29 18:32:42.069585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.333 [2024-11-29 18:32:42.069617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.333 [2024-11-29 18:32:42.069632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:22.333 [2024-11-29 18:32:42.069640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.333 [2024-11-29 18:32:42.069648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.333 [2024-11-29 18:32:42.069691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.333 [2024-11-29 18:32:42.069701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.333 [2024-11-29 18:32:42.069709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.333 [2024-11-29 18:32:42.069717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.333 [2024-11-29 18:32:42.069765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.333 [2024-11-29 18:32:42.069775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.333 [2024-11-29 18:32:42.069784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.333 [2024-11-29 18:32:42.069792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.333 [2024-11-29 18:32:42.069943] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.734 ms, result 0 00:19:22.595 00:19:22.595 00:19:22.595 18:32:42 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:22.595 18:32:42 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:23.166 18:32:42 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:23.166 [2024-11-29 18:32:42.923485] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:19:23.166 [2024-11-29 18:32:42.924046] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88113 ] 00:19:23.427 [2024-11-29 18:32:43.087409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.427 [2024-11-29 18:32:43.111910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.427 [2024-11-29 18:32:43.221676] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:23.427 [2024-11-29 18:32:43.221756] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:23.721 [2024-11-29 18:32:43.383208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.721 [2024-11-29 18:32:43.383268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:23.721 [2024-11-29 18:32:43.383284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:23.721 [2024-11-29 18:32:43.383294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.721 [2024-11-29 18:32:43.385885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.721 [2024-11-29 18:32:43.386109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:23.721 [2024-11-29 18:32:43.386130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.570 ms 00:19:23.721 [2024-11-29 18:32:43.386138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.721 [2024-11-29 18:32:43.386395] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:23.721 [2024-11-29 18:32:43.386744] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:23.721 [2024-11-29 18:32:43.386770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.721 [2024-11-29 18:32:43.386785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:23.721 [2024-11-29 18:32:43.386797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:19:23.721 [2024-11-29 18:32:43.386809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.721 [2024-11-29 18:32:43.388615] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:23.721 [2024-11-29 18:32:43.392547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.721 [2024-11-29 18:32:43.392739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:23.721 [2024-11-29 18:32:43.392766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.934 ms 00:19:23.721 [2024-11-29 18:32:43.392775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.721 [2024-11-29 18:32:43.392952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.721 [2024-11-29 18:32:43.392982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:23.721 [2024-11-29 18:32:43.392994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:23.721 [2024-11-29 18:32:43.393002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.721 [2024-11-29 18:32:43.401490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:23.721 [2024-11-29 18:32:43.401532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:23.721 [2024-11-29 18:32:43.401545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.437 ms 00:19:23.721 [2024-11-29 18:32:43.401552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.721 [2024-11-29 18:32:43.401704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.721 [2024-11-29 18:32:43.401717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:23.721 [2024-11-29 18:32:43.401727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:23.721 [2024-11-29 18:32:43.401738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.721 [2024-11-29 18:32:43.401765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.721 [2024-11-29 18:32:43.401774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:23.721 [2024-11-29 18:32:43.401782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:23.721 [2024-11-29 18:32:43.401790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.721 [2024-11-29 18:32:43.401812] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:23.721 [2024-11-29 18:32:43.403870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.721 [2024-11-29 18:32:43.404042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:23.721 [2024-11-29 18:32:43.404060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.064 ms 00:19:23.721 [2024-11-29 18:32:43.404074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.721 [2024-11-29 18:32:43.404124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.721 [2024-11-29 18:32:43.404133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:23.721 [2024-11-29 18:32:43.404141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:23.721 [2024-11-29 18:32:43.404149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.721 [2024-11-29 18:32:43.404167] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:23.721 [2024-11-29 18:32:43.404187] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:23.722 [2024-11-29 18:32:43.404224] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:23.722 [2024-11-29 18:32:43.404253] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:23.722 [2024-11-29 18:32:43.404363] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:23.722 [2024-11-29 18:32:43.404378] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:23.722 [2024-11-29 18:32:43.404389] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:23.722 [2024-11-29 18:32:43.404399] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:23.722 [2024-11-29 18:32:43.404409] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:23.722 [2024-11-29 18:32:43.404417] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:23.722 [2024-11-29 18:32:43.404425] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:23.722 [2024-11-29 18:32:43.404433] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:23.722 [2024-11-29 18:32:43.404444] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:23.722 [2024-11-29 18:32:43.404473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.722 [2024-11-29 18:32:43.404482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:23.722 [2024-11-29 18:32:43.404491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:19:23.722 [2024-11-29 18:32:43.404499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.722 [2024-11-29 18:32:43.404588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.722 [2024-11-29 18:32:43.404598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:23.722 [2024-11-29 18:32:43.404606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:23.722 [2024-11-29 18:32:43.404615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.722 [2024-11-29 18:32:43.404723] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:23.722 [2024-11-29 18:32:43.404740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:23.722 [2024-11-29 18:32:43.404750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:23.722 [2024-11-29 18:32:43.404759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.722 [2024-11-29 18:32:43.404768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:23.722 [2024-11-29 18:32:43.404776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:23.722 [2024-11-29 18:32:43.404785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:23.722 [2024-11-29 18:32:43.404798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:23.722 [2024-11-29 18:32:43.404806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:23.722 [2024-11-29 18:32:43.404813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:23.722 [2024-11-29 18:32:43.404822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:23.722 [2024-11-29 18:32:43.404829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:23.722 [2024-11-29 18:32:43.404837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:23.722 [2024-11-29 18:32:43.404845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:23.722 [2024-11-29 18:32:43.404853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:23.722 [2024-11-29 18:32:43.404860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.722 [2024-11-29 18:32:43.404868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:23.722 [2024-11-29 18:32:43.404876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:23.722 [2024-11-29 18:32:43.404884] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.722 [2024-11-29 18:32:43.404892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:23.722 [2024-11-29 18:32:43.404899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:23.722 [2024-11-29 18:32:43.404907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.722 [2024-11-29 18:32:43.404914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:23.722 [2024-11-29 18:32:43.404927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:23.722 [2024-11-29 18:32:43.404935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.722 [2024-11-29 18:32:43.404942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:23.722 [2024-11-29 18:32:43.404951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:23.722 [2024-11-29 18:32:43.404958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.722 [2024-11-29 18:32:43.404967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:23.722 [2024-11-29 18:32:43.404974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:23.722 [2024-11-29 18:32:43.404981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.722 [2024-11-29 18:32:43.404987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:23.722 [2024-11-29 18:32:43.404994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:23.722 [2024-11-29 18:32:43.405002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:23.722 [2024-11-29 18:32:43.405008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:23.722 [2024-11-29 18:32:43.405015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:23.722 [2024-11-29 18:32:43.405021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:23.722 [2024-11-29 18:32:43.405028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:23.722 [2024-11-29 18:32:43.405035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:23.722 [2024-11-29 18:32:43.405043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.722 [2024-11-29 18:32:43.405050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:23.722 [2024-11-29 18:32:43.405056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:23.722 [2024-11-29 18:32:43.405063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.722 [2024-11-29 18:32:43.405071] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:23.722 [2024-11-29 18:32:43.405078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:23.722 [2024-11-29 18:32:43.405085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:23.722 [2024-11-29 18:32:43.405101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.722 [2024-11-29 18:32:43.405109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:23.722 [2024-11-29 18:32:43.405116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:23.722 [2024-11-29 18:32:43.405123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:23.722 
[2024-11-29 18:32:43.405129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:23.722 [2024-11-29 18:32:43.405136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:23.722 [2024-11-29 18:32:43.405143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:23.722 [2024-11-29 18:32:43.405151] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:23.722 [2024-11-29 18:32:43.405160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:23.722 [2024-11-29 18:32:43.405175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:23.722 [2024-11-29 18:32:43.405182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:23.722 [2024-11-29 18:32:43.405190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:23.722 [2024-11-29 18:32:43.405197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:23.723 [2024-11-29 18:32:43.405204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:23.723 [2024-11-29 18:32:43.405215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:23.723 [2024-11-29 18:32:43.405223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:23.723 [2024-11-29 18:32:43.405230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:23.723 [2024-11-29 18:32:43.405238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:23.723 [2024-11-29 18:32:43.405246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:23.723 [2024-11-29 18:32:43.405254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:23.723 [2024-11-29 18:32:43.405262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:23.723 [2024-11-29 18:32:43.405270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:23.723 [2024-11-29 18:32:43.405278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:23.723 [2024-11-29 18:32:43.405285] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:23.723 [2024-11-29 18:32:43.405297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:23.723 [2024-11-29 18:32:43.405311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:23.723 [2024-11-29 18:32:43.405318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:23.723 [2024-11-29 18:32:43.405325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:23.723 [2024-11-29 18:32:43.405332] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:23.723 [2024-11-29 18:32:43.405339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.723 [2024-11-29 18:32:43.405347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:23.723 [2024-11-29 18:32:43.405355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:19:23.723 [2024-11-29 18:32:43.405362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.723 [2024-11-29 18:32:43.418836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.723 [2024-11-29 18:32:43.419007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:23.723 [2024-11-29 18:32:43.419024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.421 ms 00:19:23.723 [2024-11-29 18:32:43.419033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.723 [2024-11-29 18:32:43.419176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.723 [2024-11-29 18:32:43.419186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:23.723 [2024-11-29 18:32:43.419194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:23.723 [2024-11-29 18:32:43.419202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.723 [2024-11-29 18:32:43.439209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.723 [2024-11-29 18:32:43.439266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:23.723 [2024-11-29 18:32:43.439280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.983 ms 00:19:23.723 [2024-11-29 18:32:43.439289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.723 [2024-11-29 18:32:43.439388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.723 [2024-11-29 18:32:43.439402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:23.723 [2024-11-29 18:32:43.439412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:23.723 [2024-11-29 18:32:43.439421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.723 [2024-11-29 18:32:43.439946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.723 [2024-11-29 18:32:43.439976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:23.723 [2024-11-29 18:32:43.439997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.499 ms 00:19:23.723 [2024-11-29 18:32:43.440007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.723 [2024-11-29 18:32:43.440173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.723 [2024-11-29 18:32:43.440196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:23.723 [2024-11-29 18:32:43.440207] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:19:23.723 [2024-11-29 18:32:43.440220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.723 [2024-11-29 18:32:43.447855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.723 [2024-11-29 18:32:43.447897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:23.723 [2024-11-29 18:32:43.447914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.611 ms 00:19:23.723 [2024-11-29 18:32:43.447921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.723 [2024-11-29 18:32:43.451379] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:23.723 [2024-11-29 18:32:43.451568] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:23.723 [2024-11-29 18:32:43.451585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.723 [2024-11-29 18:32:43.451593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:23.723 [2024-11-29 18:32:43.451603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.564 ms 00:19:23.723 [2024-11-29 18:32:43.451611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.723 [2024-11-29 18:32:43.467614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.723 [2024-11-29 18:32:43.467671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:23.723 [2024-11-29 18:32:43.467686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.688 ms 00:19:23.723 [2024-11-29 18:32:43.467695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.723 [2024-11-29 18:32:43.470907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.723 [2024-11-29 18:32:43.471089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:23.723 [2024-11-29 18:32:43.471108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.113 ms 00:19:23.723 [2024-11-29 18:32:43.471116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.723 [2024-11-29 18:32:43.474007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.723 [2024-11-29 18:32:43.474055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:23.723 [2024-11-29 18:32:43.474078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.833 ms 00:19:23.723 [2024-11-29 18:32:43.474085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.723 [2024-11-29 18:32:43.474434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.723 [2024-11-29 18:32:43.474446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:23.723 [2024-11-29 18:32:43.474613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:19:23.723 [2024-11-29 18:32:43.474637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.723 [2024-11-29 18:32:43.498901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.723 [2024-11-29 18:32:43.499092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:23.723 [2024-11-29 18:32:43.499150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.219 ms 00:19:23.723 [2024-11-29 18:32:43.499174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.723 [2024-11-29 18:32:43.507634] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:23.723 [2024-11-29 18:32:43.525725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.724 [2024-11-29 18:32:43.525901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:23.724 [2024-11-29 18:32:43.525928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.457 ms 00:19:23.724 [2024-11-29 18:32:43.525937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.724 [2024-11-29 18:32:43.526027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.724 [2024-11-29 18:32:43.526039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:23.724 [2024-11-29 18:32:43.526051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:23.724 [2024-11-29 18:32:43.526059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.724 [2024-11-29 18:32:43.526139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.724 [2024-11-29 18:32:43.526152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:23.724 [2024-11-29 18:32:43.526161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:23.724 [2024-11-29 18:32:43.526169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.724 [2024-11-29 18:32:43.526200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.724 [2024-11-29 18:32:43.526213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:23.724 [2024-11-29 18:32:43.526221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:23.724 [2024-11-29 18:32:43.526235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.724 [2024-11-29 18:32:43.526273] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:23.724 [2024-11-29 18:32:43.526284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.724 [2024-11-29 18:32:43.526292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:23.724 [2024-11-29 18:32:43.526300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:23.724 [2024-11-29 18:32:43.526307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.724 [2024-11-29 18:32:43.531778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.724 [2024-11-29 18:32:43.531823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:23.724 [2024-11-29 18:32:43.531835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.447 ms 00:19:23.724 [2024-11-29 18:32:43.531850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.724 [2024-11-29 18:32:43.531942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.724 [2024-11-29 18:32:43.531953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:23.724 [2024-11-29 18:32:43.531961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:23.724 [2024-11-29 18:32:43.531969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.724 
[2024-11-29 18:32:43.532924] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:23.724 [2024-11-29 18:32:43.534346] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 149.435 ms, result 0 00:19:23.724 [2024-11-29 18:32:43.535615] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:23.724 [2024-11-29 18:32:43.542942] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:24.332  [2024-11-29T18:32:44.237Z] Copying: 4096/4096 [kB] (average 10163 kBps)[2024-11-29 18:32:43.948031] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:24.332 [2024-11-29 18:32:43.949386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.332 [2024-11-29 18:32:43.949436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:24.332 [2024-11-29 18:32:43.949447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:24.332 [2024-11-29 18:32:43.949488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.332 [2024-11-29 18:32:43.949510] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:24.332 [2024-11-29 18:32:43.950209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.332 [2024-11-29 18:32:43.950252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:24.332 [2024-11-29 18:32:43.950263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:19:24.332 [2024-11-29 18:32:43.950273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.332 [2024-11-29 18:32:43.952831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.332 [2024-11-29 18:32:43.952893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:24.332 [2024-11-29 18:32:43.952907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.528 ms 00:19:24.332 [2024-11-29 18:32:43.952915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.333 [2024-11-29 18:32:43.957158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.333 [2024-11-29 18:32:43.957195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:24.333 [2024-11-29 18:32:43.957205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.225 ms 00:19:24.333 [2024-11-29 18:32:43.957221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.333 [2024-11-29 18:32:43.964372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.333 [2024-11-29 18:32:43.964432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:24.333 [2024-11-29 18:32:43.964447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.116 ms 00:19:24.333 [2024-11-29 18:32:43.964474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.333 [2024-11-29 18:32:43.967023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.333 [2024-11-29 18:32:43.967220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:24.333 [2024-11-29 18:32:43.967240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.493 ms 00:19:24.333 [2024-11-29 18:32:43.967248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.333 [2024-11-29 18:32:43.973024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.333 [2024-11-29 18:32:43.973081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:24.333 [2024-11-29 18:32:43.973093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.637 ms 00:19:24.333 [2024-11-29 18:32:43.973100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.333 [2024-11-29 18:32:43.973236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.333 [2024-11-29 18:32:43.973247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:24.333 [2024-11-29 18:32:43.973268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:24.333 [2024-11-29 18:32:43.973275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.333 [2024-11-29 18:32:43.977187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.333 [2024-11-29 18:32:43.977362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:24.333 [2024-11-29 18:32:43.977429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.892 ms 00:19:24.333 [2024-11-29 18:32:43.977467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.333 [2024-11-29 18:32:43.980592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.333 [2024-11-29 18:32:43.980769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:24.333 [2024-11-29 18:32:43.980786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.053 ms 00:19:24.333 [2024-11-29 18:32:43.980793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.333 [2024-11-29 18:32:43.983331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.333 [2024-11-29 18:32:43.983389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:24.333 [2024-11-29 18:32:43.983399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.416 ms 00:19:24.333 [2024-11-29 18:32:43.983406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.333 [2024-11-29 18:32:43.985855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.333 [2024-11-29 18:32:43.985905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:24.333 [2024-11-29 18:32:43.985915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.348 ms 00:19:24.333 [2024-11-29 18:32:43.985922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.333 [2024-11-29 18:32:43.985966] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:24.333 [2024-11-29 18:32:43.985981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 
[2024-11-29 18:32:43.986025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:19:24.333 [2024-11-29 18:32:43.986229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:24.333 [2024-11-29 18:32:43.986286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:24.334 [2024-11-29 18:32:43.986831] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:24.334 [2024-11-29 18:32:43.986840] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 810cac8b-8d30-4fbe-8f84-c99f968afe2f 00:19:24.334 [2024-11-29 18:32:43.986849] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:24.334 [2024-11-29 18:32:43.986856] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:19:24.334 [2024-11-29 18:32:43.986863] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:24.334 [2024-11-29 18:32:43.986871] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:24.335 [2024-11-29 18:32:43.986879] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:24.335 [2024-11-29 18:32:43.986893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:24.335 [2024-11-29 18:32:43.986901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:24.335 [2024-11-29 18:32:43.986907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:24.335 [2024-11-29 18:32:43.986914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:24.335 [2024-11-29 18:32:43.986921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.335 [2024-11-29 18:32:43.986929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:24.335 [2024-11-29 18:32:43.986938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:19:24.335 [2024-11-29 18:32:43.986945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.335 [2024-11-29 18:32:43.988940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.335 [2024-11-29 18:32:43.988975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:24.335 [2024-11-29 18:32:43.988986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.974 ms 00:19:24.335 [2024-11-29 18:32:43.989002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.335 [2024-11-29 18:32:43.989139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.335 [2024-11-29 18:32:43.989148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:24.335 [2024-11-29 18:32:43.989157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:19:24.335 [2024-11-29 18:32:43.989165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.335 [2024-11-29 18:32:43.997299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.335 [2024-11-29 18:32:43.997348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:24.335 [2024-11-29 18:32:43.997365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.335 [2024-11-29 18:32:43.997372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.335 [2024-11-29 18:32:43.997438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.335 [2024-11-29 18:32:43.997446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:24.335 [2024-11-29 18:32:43.997476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.335 [2024-11-29 18:32:43.997484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.335 [2024-11-29 18:32:43.997530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.335 [2024-11-29 18:32:43.997540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:24.335 [2024-11-29 18:32:43.997548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.335 [2024-11-29 18:32:43.997559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.335 [2024-11-29 18:32:43.997576] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.335 [2024-11-29 18:32:43.997584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:24.335 [2024-11-29 18:32:43.997592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.335 [2024-11-29 18:32:43.997598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.335 [2024-11-29 18:32:44.011078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.335 [2024-11-29 18:32:44.011130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:24.335 [2024-11-29 18:32:44.011141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.335 [2024-11-29 18:32:44.011156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.335 [2024-11-29 18:32:44.022378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.335 [2024-11-29 18:32:44.022428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:24.335 [2024-11-29 18:32:44.022439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.335 [2024-11-29 18:32:44.022447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.335 [2024-11-29 18:32:44.022517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.335 [2024-11-29 18:32:44.022527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:24.335 [2024-11-29 18:32:44.022536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.335 [2024-11-29 18:32:44.022545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.335 [2024-11-29 18:32:44.022583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.335 [2024-11-29 18:32:44.022593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:24.335 [2024-11-29 18:32:44.022601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.335 [2024-11-29 18:32:44.022609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.335 [2024-11-29 18:32:44.022677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.335 [2024-11-29 18:32:44.022687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:24.335 [2024-11-29 18:32:44.022695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.335 [2024-11-29 18:32:44.022703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.335 [2024-11-29 18:32:44.022739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.335 [2024-11-29 18:32:44.022751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:24.335 [2024-11-29 18:32:44.022760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.335 [2024-11-29 18:32:44.022767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.335 [2024-11-29 18:32:44.022809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.335 [2024-11-29 18:32:44.022819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:24.335 [2024-11-29 18:32:44.022827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.335 [2024-11-29 18:32:44.022836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0
00:19:24.335 [2024-11-29 18:32:44.022887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:24.335 [2024-11-29 18:32:44.022898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:24.335 [2024-11-29 18:32:44.022905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:24.335 [2024-11-29 18:32:44.022913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.335 [2024-11-29 18:32:44.023059] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.644 ms, result 0
00:19:24.335
00:19:24.335
00:19:24.335 18:32:44 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=88132
00:19:24.335 18:32:44 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 88132
00:19:24.335 18:32:44 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 88132 ']'
00:19:24.335 18:32:44 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:24.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
18:32:44 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:19:24.335 18:32:44 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:24.335 18:32:44 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:24.335 18:32:44 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:24.335 18:32:44 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:19:24.597 [2024-11-29 18:32:44.314861] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization...
[2024-11-29 18:32:44.315008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88132 ]
00:19:24.597 [2024-11-29 18:32:44.467809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:24.597 [2024-11-29 18:32:44.496408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:25.538 18:32:45 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:25.538 18:32:45 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:19:25.538 18:32:45 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:19:25.538 [2024-11-29 18:32:45.374504] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-29 18:32:45.374588] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:25.800 [2024-11-29 18:32:45.552917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:25.800 [2024-11-29 18:32:45.552984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:19:25.800 [2024-11-29 18:32:45.552998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:19:25.800 [2024-11-29 18:32:45.553009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:25.800 [2024-11-29 18:32:45.555609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:25.800 [2024-11-29 18:32:45.555829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
[2024-11-29 18:32:45.555850]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.575 ms 00:19:25.800 [2024-11-29 18:32:45.555861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.800 [2024-11-29 18:32:45.555989] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:25.800 [2024-11-29 18:32:45.556270] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:25.800 [2024-11-29 18:32:45.556290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.800 [2024-11-29 18:32:45.556300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:25.800 [2024-11-29 18:32:45.556310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:19:25.800 [2024-11-29 18:32:45.556320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.800 [2024-11-29 18:32:45.558157] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:25.800 [2024-11-29 18:32:45.561838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.800 [2024-11-29 18:32:45.561893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:25.800 [2024-11-29 18:32:45.561907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.677 ms 00:19:25.800 [2024-11-29 18:32:45.561915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.800 [2024-11-29 18:32:45.562005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.800 [2024-11-29 18:32:45.562015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:25.800 [2024-11-29 18:32:45.562028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:25.800 [2024-11-29 18:32:45.562039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.800 [2024-11-29 18:32:45.570435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.800 [2024-11-29 18:32:45.570497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:25.800 [2024-11-29 18:32:45.570510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.316 ms 00:19:25.800 [2024-11-29 18:32:45.570517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.800 [2024-11-29 18:32:45.570636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.800 [2024-11-29 18:32:45.570647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:25.800 [2024-11-29 18:32:45.570660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:25.800 [2024-11-29 18:32:45.570667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.800 [2024-11-29 18:32:45.570703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.800 [2024-11-29 18:32:45.570715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:25.800 [2024-11-29 18:32:45.570725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:25.800 [2024-11-29 18:32:45.570732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.800 [2024-11-29 18:32:45.570761] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:25.800 [2024-11-29 18:32:45.572672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:25.800 [2024-11-29 18:32:45.572713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:25.800 [2024-11-29 18:32:45.572731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.918 ms 00:19:25.800 [2024-11-29 18:32:45.572740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.800 [2024-11-29 18:32:45.572782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.800 [2024-11-29 18:32:45.572793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:25.800 [2024-11-29 18:32:45.572802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:25.800 [2024-11-29 18:32:45.572811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.800 [2024-11-29 18:32:45.572832] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:25.800 [2024-11-29 18:32:45.572853] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:25.800 [2024-11-29 18:32:45.572895] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:25.800 [2024-11-29 18:32:45.572914] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:25.800 [2024-11-29 18:32:45.573025] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:25.800 [2024-11-29 18:32:45.573043] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:25.800 [2024-11-29 18:32:45.573054] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:25.800 [2024-11-29 18:32:45.573066] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:25.800 [2024-11-29 18:32:45.573079] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:25.800 [2024-11-29 18:32:45.573091] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:25.800 [2024-11-29 18:32:45.573099] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:25.800 [2024-11-29 18:32:45.573110] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:25.800 [2024-11-29 18:32:45.573118] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:25.800 [2024-11-29 18:32:45.573127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.800 [2024-11-29 18:32:45.573135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:25.800 [2024-11-29 18:32:45.573146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:19:25.800 [2024-11-29 18:32:45.573153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.800 [2024-11-29 18:32:45.573245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.800 [2024-11-29 18:32:45.573253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:25.800 [2024-11-29 18:32:45.573264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:25.800 [2024-11-29 18:32:45.573270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.800 
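A quick consistency check on the layout parameters just printed: 23592960 L2P entries at an address size of 4 bytes come to exactly 90 MiB of mapping table, which is the size of the l2p region in the NV cache layout dump that follows. A minimal shell sketch of that arithmetic (variable names are illustrative; the 4 KiB FTL block size implied by the other figures is an assumption, since the log does not print it):

    # Values taken from the ftl_layout_setup lines above
    entries=23592960   # "L2P entries"
    addr_size=4        # "L2P address size", bytes
    echo "l2p table: $(( entries * addr_size / 1024 / 1024 )) MiB"
    # prints "l2p table: 90 MiB" -- matching "Region l2p ... blocks: 90.00 MiB" below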
[2024-11-29 18:32:45.573377] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:25.800 [2024-11-29 18:32:45.573388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:25.800 [2024-11-29 18:32:45.573401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:25.800 [2024-11-29 18:32:45.573411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.800 [2024-11-29 18:32:45.573423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:25.800 [2024-11-29 18:32:45.573437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:25.800 [2024-11-29 18:32:45.573447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:25.800 [2024-11-29 18:32:45.573476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:25.800 [2024-11-29 18:32:45.573488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:25.800 [2024-11-29 18:32:45.573496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:25.800 [2024-11-29 18:32:45.573506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:25.800 [2024-11-29 18:32:45.573515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:25.800 [2024-11-29 18:32:45.573525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:25.800 [2024-11-29 18:32:45.573533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:25.800 [2024-11-29 18:32:45.573543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:25.800 [2024-11-29 18:32:45.573559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.800 [2024-11-29 18:32:45.573569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:25.800 [2024-11-29 18:32:45.573577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:25.800 [2024-11-29 18:32:45.573587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.800 [2024-11-29 18:32:45.573595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:25.800 [2024-11-29 18:32:45.573607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:25.800 [2024-11-29 18:32:45.573615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.800 [2024-11-29 18:32:45.573624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:25.800 [2024-11-29 18:32:45.573631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:25.800 [2024-11-29 18:32:45.573642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.800 [2024-11-29 18:32:45.573649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:25.800 [2024-11-29 18:32:45.573660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:25.800 [2024-11-29 18:32:45.573668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.800 [2024-11-29 18:32:45.573678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:25.801 [2024-11-29 18:32:45.573686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:25.801 [2024-11-29 18:32:45.573695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.801 [2024-11-29 18:32:45.573704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:19:25.801 [2024-11-29 18:32:45.573713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:25.801 [2024-11-29 18:32:45.573720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:25.801 [2024-11-29 18:32:45.573729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:25.801 [2024-11-29 18:32:45.573736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:25.801 [2024-11-29 18:32:45.573746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:25.801 [2024-11-29 18:32:45.573753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:25.801 [2024-11-29 18:32:45.573762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:25.801 [2024-11-29 18:32:45.573769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.801 [2024-11-29 18:32:45.573778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:25.801 [2024-11-29 18:32:45.573784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:25.801 [2024-11-29 18:32:45.573793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.801 [2024-11-29 18:32:45.573801] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:25.801 [2024-11-29 18:32:45.573811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:25.801 [2024-11-29 18:32:45.573819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:25.801 [2024-11-29 18:32:45.573827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.801 [2024-11-29 18:32:45.573835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:25.801 [2024-11-29 18:32:45.573844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:25.801 [2024-11-29 18:32:45.573850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:25.801 [2024-11-29 18:32:45.573859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:25.801 [2024-11-29 18:32:45.573866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:25.801 [2024-11-29 18:32:45.573878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:25.801 [2024-11-29 18:32:45.573886] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:25.801 [2024-11-29 18:32:45.573897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:25.801 [2024-11-29 18:32:45.573908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:25.801 [2024-11-29 18:32:45.573917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:25.801 [2024-11-29 18:32:45.573924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:25.801 [2024-11-29 18:32:45.573933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:25.801 [2024-11-29 18:32:45.573940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x6320 blk_sz:0x800 00:19:25.801 [2024-11-29 18:32:45.573949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:25.801 [2024-11-29 18:32:45.573956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:25.801 [2024-11-29 18:32:45.573965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:25.801 [2024-11-29 18:32:45.573972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:25.801 [2024-11-29 18:32:45.573981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:25.801 [2024-11-29 18:32:45.573988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:25.801 [2024-11-29 18:32:45.573998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:25.801 [2024-11-29 18:32:45.574005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:25.801 [2024-11-29 18:32:45.574015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:25.801 [2024-11-29 18:32:45.574023] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:25.801 [2024-11-29 18:32:45.574033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:25.801 [2024-11-29 18:32:45.574041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:25.801 [2024-11-29 18:32:45.574052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:25.801 [2024-11-29 18:32:45.574059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:25.801 [2024-11-29 18:32:45.574082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:25.801 [2024-11-29 18:32:45.574090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.801 [2024-11-29 18:32:45.574100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:25.801 [2024-11-29 18:32:45.574113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:19:25.801 [2024-11-29 18:32:45.574122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.801 [2024-11-29 18:32:45.586994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.801 [2024-11-29 18:32:45.587042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:25.801 [2024-11-29 18:32:45.587054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.812 ms 00:19:25.801 [2024-11-29 18:32:45.587066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.801 
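The SB metadata layout above records each region as hexadecimal block offsets and sizes, and these decode to the MiB figures in the earlier region dump once multiplied by the block size. For example, the base-device region with type 0x9 (blk_offs:0x40, blk_sz:0x1900000) appears to be the data_btm region: at an assumed 4 KiB block it works out to a 0.25 MiB offset and a 102400.00 MiB size, both matching the dump. A small decode sketch (the block size and the type-to-region pairing are inferences from the matching numbers, not stated in the log):

    blk_offs=0x40; blk_sz=0x1900000   # region type 0x9 from the table above
    echo "offset: $(( blk_offs * 4096 / 1024 )) KiB, size: $(( blk_sz * 4096 / 1024 / 1024 )) MiB"
    # prints "offset: 256 KiB, size: 102400 MiB"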
[2024-11-29 18:32:45.587195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.801 [2024-11-29 18:32:45.587211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:25.801 [2024-11-29 18:32:45.587219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:25.801 [2024-11-29 18:32:45.587230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.801 [2024-11-29 18:32:45.598465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.801 [2024-11-29 18:32:45.598506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:25.801 [2024-11-29 18:32:45.598516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.194 ms 00:19:25.801 [2024-11-29 18:32:45.598528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.801 [2024-11-29 18:32:45.598589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.801 [2024-11-29 18:32:45.598604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:25.801 [2024-11-29 18:32:45.598614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:25.801 [2024-11-29 18:32:45.598623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.801 [2024-11-29 18:32:45.599086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.801 [2024-11-29 18:32:45.599115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:25.801 [2024-11-29 18:32:45.599133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:19:25.801 [2024-11-29 18:32:45.599143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.801 [2024-11-29 18:32:45.599287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.801 [2024-11-29 18:32:45.599309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:25.801 [2024-11-29 18:32:45.599322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:19:25.801 [2024-11-29 18:32:45.599335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.801 [2024-11-29 18:32:45.606563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.801 [2024-11-29 18:32:45.606606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:25.801 [2024-11-29 18:32:45.606616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.206 ms 00:19:25.801 [2024-11-29 18:32:45.606626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.801 [2024-11-29 18:32:45.618162] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:25.801 [2024-11-29 18:32:45.618215] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:25.801 [2024-11-29 18:32:45.618229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.801 [2024-11-29 18:32:45.618240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:25.801 [2024-11-29 18:32:45.618250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.511 ms 00:19:25.801 [2024-11-29 18:32:45.618260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.801 [2024-11-29 18:32:45.636492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:25.801 [2024-11-29 18:32:45.636546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:25.801 [2024-11-29 18:32:45.636558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.177 ms 00:19:25.801 [2024-11-29 18:32:45.636576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.801 [2024-11-29 18:32:45.639469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.801 [2024-11-29 18:32:45.639517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:25.801 [2024-11-29 18:32:45.639527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.783 ms 00:19:25.801 [2024-11-29 18:32:45.639536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.801 [2024-11-29 18:32:45.642018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.801 [2024-11-29 18:32:45.642089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:25.801 [2024-11-29 18:32:45.642099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.431 ms 00:19:25.801 [2024-11-29 18:32:45.642109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.801 [2024-11-29 18:32:45.642438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.801 [2024-11-29 18:32:45.642483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:25.801 [2024-11-29 18:32:45.642494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:19:25.801 [2024-11-29 18:32:45.642504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.802 [2024-11-29 18:32:45.665725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.802 [2024-11-29 18:32:45.665780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:25.802 [2024-11-29 18:32:45.665792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.197 ms 00:19:25.802 [2024-11-29 18:32:45.665807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.802 [2024-11-29 18:32:45.673760] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:25.802 [2024-11-29 18:32:45.690764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.802 [2024-11-29 18:32:45.690954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:25.802 [2024-11-29 18:32:45.690977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.868 ms 00:19:25.802 [2024-11-29 18:32:45.690986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.802 [2024-11-29 18:32:45.691077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.802 [2024-11-29 18:32:45.691090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:25.802 [2024-11-29 18:32:45.691102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:25.802 [2024-11-29 18:32:45.691109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.802 [2024-11-29 18:32:45.691163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.802 [2024-11-29 18:32:45.691172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:25.802 [2024-11-29 18:32:45.691182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.031 ms 00:19:25.802 [2024-11-29 18:32:45.691190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.802 [2024-11-29 18:32:45.691215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.802 [2024-11-29 18:32:45.691223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:25.802 [2024-11-29 18:32:45.691243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:25.802 [2024-11-29 18:32:45.691250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.802 [2024-11-29 18:32:45.691284] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:25.802 [2024-11-29 18:32:45.691293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.802 [2024-11-29 18:32:45.691302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:25.802 [2024-11-29 18:32:45.691310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:25.802 [2024-11-29 18:32:45.691318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.802 [2024-11-29 18:32:45.696439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.802 [2024-11-29 18:32:45.696505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:25.802 [2024-11-29 18:32:45.696519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.097 ms 00:19:25.802 [2024-11-29 18:32:45.696529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.802 [2024-11-29 18:32:45.696615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.802 [2024-11-29 18:32:45.696628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:25.802 [2024-11-29 18:32:45.696637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:25.802 [2024-11-29 18:32:45.696646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.802 [2024-11-29 18:32:45.697747] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:25.802 [2024-11-29 18:32:45.699031] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 144.517 ms, result 0 00:19:25.802 [2024-11-29 18:32:45.700559] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:26.064 Some configs were skipped because the RPC state that can call them passed over. 
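With 'FTL startup' finished, trim.sh drives the actual test through the RPC socket: two bdev_ftl_unmap calls, shown below. The second call's --lba 23591936 is 23592960 - 1024, i.e. the final 1024 blocks of the L2P space reported during startup, so the test trims one range at the head and one at the tail of the device. For reference, the two calls as they would be issued by hand (paths and bdev name copied from this log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_ftl_unmap -b ftl0 --lba 0        --num_blocks 1024   # head of the device
    "$RPC" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024   # tail: 23592960 - 1024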
00:19:26.064 18:32:45 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
[2024-11-29 18:32:45.938448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 18:32:45.938664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
[2024-11-29 18:32:45.938737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.787 ms
[2024-11-29 18:32:45.938763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 18:32:45.938835] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.181 ms, result 0
00:19:26.064 true
00:19:26.064 18:32:45 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
[FTL][ftl0] duration: 0.480 ms 00:19:26.590 [2024-11-29 18:32:46.323468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.590 [2024-11-29 18:32:46.323748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.590 [2024-11-29 18:32:46.323760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:26.590 [2024-11-29 18:32:46.323770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:19:26.590 [2024-11-29 18:32:46.323780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.590 [2024-11-29 18:32:46.328292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.590 [2024-11-29 18:32:46.328330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:26.590 [2024-11-29 18:32:46.328340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.493 ms 00:19:26.590 [2024-11-29 18:32:46.328353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.590 [2024-11-29 18:32:46.335520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.590 [2024-11-29 18:32:46.335636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:26.590 [2024-11-29 18:32:46.335688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.132 ms 00:19:26.590 [2024-11-29 18:32:46.335715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.590 [2024-11-29 18:32:46.338242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.590 [2024-11-29 18:32:46.338361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:26.590 [2024-11-29 18:32:46.338375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.445 ms 00:19:26.590 [2024-11-29 18:32:46.338385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.590 [2024-11-29 18:32:46.342078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.590 [2024-11-29 18:32:46.342118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:26.590 [2024-11-29 18:32:46.342130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.648 ms 00:19:26.590 [2024-11-29 18:32:46.342139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.590 [2024-11-29 18:32:46.342266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.590 [2024-11-29 18:32:46.342278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:26.590 [2024-11-29 18:32:46.342286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:19:26.590 [2024-11-29 18:32:46.342295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.590 [2024-11-29 18:32:46.344504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.591 [2024-11-29 18:32:46.344627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:26.591 [2024-11-29 18:32:46.344642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.190 ms 00:19:26.591 [2024-11-29 18:32:46.344654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.591 [2024-11-29 18:32:46.346390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.591 [2024-11-29 18:32:46.346430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:26.591 [2024-11-29 
18:32:46.346440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.701 ms 00:19:26.591 [2024-11-29 18:32:46.346448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.591 [2024-11-29 18:32:46.347821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.591 [2024-11-29 18:32:46.347937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:26.591 [2024-11-29 18:32:46.347951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:19:26.591 [2024-11-29 18:32:46.347959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.591 [2024-11-29 18:32:46.349314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.591 [2024-11-29 18:32:46.349354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:26.591 [2024-11-29 18:32:46.349363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:19:26.591 [2024-11-29 18:32:46.349372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.591 [2024-11-29 18:32:46.349406] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:26.591 [2024-11-29 18:32:46.349421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349574] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 
18:32:46.349788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.349982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:19:26.591 [2024-11-29 18:32:46.349994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.350003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.350012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.350019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.350028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.350035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.350045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.350052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.350061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.350086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.350096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:26.591 [2024-11-29 18:32:46.350103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:26.592 [2024-11-29 18:32:46.350308] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:26.592 [2024-11-29 18:32:46.350315] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 810cac8b-8d30-4fbe-8f84-c99f968afe2f 00:19:26.592 [2024-11-29 18:32:46.350326] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:26.592 [2024-11-29 18:32:46.350333] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:26.592 [2024-11-29 18:32:46.350342] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:26.592 [2024-11-29 18:32:46.350350] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:26.592 [2024-11-29 18:32:46.350360] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:26.592 [2024-11-29 18:32:46.350371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:26.592 [2024-11-29 18:32:46.350380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:26.592 [2024-11-29 18:32:46.350387] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:26.592 [2024-11-29 18:32:46.350395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:26.592 [2024-11-29 18:32:46.350402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.592 [2024-11-29 18:32:46.350413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:26.592 [2024-11-29 18:32:46.350425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:19:26.592 [2024-11-29 18:32:46.350435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.592 [2024-11-29 18:32:46.352046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.592 [2024-11-29 18:32:46.352074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:26.592 [2024-11-29 18:32:46.352082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:19:26.592 [2024-11-29 18:32:46.352091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.592 [2024-11-29 18:32:46.352192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:26.592 [2024-11-29 18:32:46.352203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:26.592 [2024-11-29 18:32:46.352212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:26.592 [2024-11-29 18:32:46.352223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.592 [2024-11-29 18:32:46.358214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.592 [2024-11-29 18:32:46.358342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:26.592 [2024-11-29 18:32:46.358356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.592 [2024-11-29 18:32:46.358366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.592 [2024-11-29 18:32:46.358444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.592 [2024-11-29 18:32:46.358481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:26.592 [2024-11-29 18:32:46.358489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.592 [2024-11-29 18:32:46.358502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.592 [2024-11-29 18:32:46.358543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.592 [2024-11-29 18:32:46.358554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:26.592 [2024-11-29 18:32:46.358562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.592 [2024-11-29 18:32:46.358570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.592 [2024-11-29 18:32:46.358588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.592 [2024-11-29 18:32:46.358598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:26.592 [2024-11-29 18:32:46.358605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.592 [2024-11-29 18:32:46.358614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.592 [2024-11-29 18:32:46.369227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.592 [2024-11-29 18:32:46.369365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:26.592 [2024-11-29 18:32:46.369380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.592 [2024-11-29 18:32:46.369393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.592 [2024-11-29 18:32:46.377289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.592 [2024-11-29 18:32:46.377337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:26.592 [2024-11-29 18:32:46.377347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.592 [2024-11-29 18:32:46.377359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.592 [2024-11-29 18:32:46.377402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.592 [2024-11-29 18:32:46.377414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:26.592 [2024-11-29 18:32:46.377421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.592 [2024-11-29 18:32:46.377431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
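
Annotation: the records above and below all follow the same fixed trace pattern from mngt/ftl_mngt.c — an Action (or Rollback) marker, then matching "name:", "duration:", and "status:" records for each management step; the rollback steps continue just below. A minimal sketch for tabulating those step timings from a saved console log; this assumes one NOTICE record per line, as the console originally printed them, and "ftl_run.log" is a hypothetical capture file, not an artifact of this job:

# Pair each step's "name:" record with the "duration:" record that follows it.
# sed keeps only the name/duration fields of trace_step records; paste joins
# consecutive lines pairwise into "name<TAB>duration" rows.
sed -nE 's/.*trace_step.*(name|duration): (.*)/\1: \2/p' ftl_run.log | paste - -
# e.g.  name: Persist L2P	duration: 4.493 ms
#       name: Set FTL clean state	duration: 1.284 ms
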
00:19:26.592 [2024-11-29 18:32:46.377480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:26.592 [2024-11-29 18:32:46.377492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:26.592 [2024-11-29 18:32:46.377499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:26.592 [2024-11-29 18:32:46.377509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:26.592 [2024-11-29 18:32:46.377578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:26.592 [2024-11-29 18:32:46.377592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:26.592 [2024-11-29 18:32:46.377600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:26.592 [2024-11-29 18:32:46.377609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:26.592 [2024-11-29 18:32:46.377647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:26.592 [2024-11-29 18:32:46.377659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:19:26.592 [2024-11-29 18:32:46.377670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:26.592 [2024-11-29 18:32:46.377681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:26.592 [2024-11-29 18:32:46.377718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:26.592 [2024-11-29 18:32:46.377730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:26.592 [2024-11-29 18:32:46.377738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:26.592 [2024-11-29 18:32:46.377747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:26.592 [2024-11-29 18:32:46.377789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:26.592 [2024-11-29 18:32:46.377804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:26.592 [2024-11-29 18:32:46.377812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:26.592 [2024-11-29 18:32:46.377821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:26.592 [2024-11-29 18:32:46.377950] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.111 ms, result 0
00:19:26.854 18:32:46 ftl.ftl_trim -- ftl/trim.sh@105 -- /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-29 18:32:46.615964] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization...
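
Annotation: at this point the test moves from teardown to the read-back phase: trim.sh@105 launches spdk_dd to copy the bdev's contents into a regular file. The flags below are taken verbatim from that line, with paths shortened to be repo-relative. --count=65536 input blocks works out to 256 MiB assuming the bdev's 4 KiB block size, which is consistent with the "Copying: 256/256 [MB]" progress further down; --json replays the saved RPC configuration at startup, which is why a full "FTL startup" sequence runs before any data moves. Note also that the two bdev_ftl_unmap calls at the top of this section trimmed the first and last 1024 blocks of the device: 23591936 + 1024 = 23592960, the L2P entry count reported in the layout dump below.

# Read-back invocation as issued by trim.sh@105 (paths shortened):
build/bin/spdk_dd --ib=ftl0 --of=test/ftl/data --count=65536 \
    --json=test/ftl/config/ftl.json
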
00:19:26.854 [2024-11-29 18:32:46.616095] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88174 ]
00:19:27.116 [2024-11-29 18:32:46.775536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:27.116 [2024-11-29 18:32:46.804048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:27.116 [2024-11-29 18:32:46.913803] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:27.116 [2024-11-29 18:32:46.914112] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:27.379 [2024-11-29 18:32:47.074663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:27.379 [2024-11-29 18:32:47.074725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:19:27.379 [2024-11-29 18:32:47.074741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:19:27.379 [2024-11-29 18:32:47.074749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:27.379 [2024-11-29 18:32:47.077309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:27.379 [2024-11-29 18:32:47.077367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:27.379 [2024-11-29 18:32:47.077380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.538 ms
00:19:27.379 [2024-11-29 18:32:47.077392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:27.379 [2024-11-29 18:32:47.077646] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:19:27.379 [2024-11-29 18:32:47.077985] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:19:27.379 [2024-11-29 18:32:47.078018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:27.379 [2024-11-29 18:32:47.078027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:27.379 [2024-11-29 18:32:47.078037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms
00:19:27.379 [2024-11-29 18:32:47.078045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:27.379 [2024-11-29 18:32:47.079838] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:19:27.379 [2024-11-29 18:32:47.083945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:27.379 [2024-11-29 18:32:47.084001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:19:27.379 [2024-11-29 18:32:47.084019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.110 ms
00:19:27.379 [2024-11-29 18:32:47.084028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:27.379 [2024-11-29 18:32:47.084128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:27.379 [2024-11-29 18:32:47.084139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:19:27.379 [2024-11-29 18:32:47.084148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms
00:19:27.379 [2024-11-29 18:32:47.084155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:27.379 [2024-11-29 18:32:47.092615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:27.379 [2024-11-29 18:32:47.092659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:27.379 [2024-11-29 18:32:47.092669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.412 ms 00:19:27.379 [2024-11-29 18:32:47.092686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.379 [2024-11-29 18:32:47.092834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.379 [2024-11-29 18:32:47.092846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:27.379 [2024-11-29 18:32:47.092855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:27.379 [2024-11-29 18:32:47.092867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.379 [2024-11-29 18:32:47.092897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.379 [2024-11-29 18:32:47.092911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:27.379 [2024-11-29 18:32:47.092919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:27.379 [2024-11-29 18:32:47.092930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.380 [2024-11-29 18:32:47.092951] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:27.380 [2024-11-29 18:32:47.095085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.380 [2024-11-29 18:32:47.095245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:27.380 [2024-11-29 18:32:47.095260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.139 ms 00:19:27.380 [2024-11-29 18:32:47.095276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.380 [2024-11-29 18:32:47.095325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.380 [2024-11-29 18:32:47.095334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:27.380 [2024-11-29 18:32:47.095343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:27.380 [2024-11-29 18:32:47.095350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.380 [2024-11-29 18:32:47.095368] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:27.380 [2024-11-29 18:32:47.095388] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:27.380 [2024-11-29 18:32:47.095422] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:27.380 [2024-11-29 18:32:47.095444] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:27.380 [2024-11-29 18:32:47.095571] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:27.380 [2024-11-29 18:32:47.095583] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:27.380 [2024-11-29 18:32:47.095594] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:27.380 [2024-11-29 18:32:47.095604] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:27.380 [2024-11-29 18:32:47.095613] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:27.380 [2024-11-29 18:32:47.095621] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:27.380 [2024-11-29 18:32:47.095629] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:27.380 [2024-11-29 18:32:47.095636] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:27.380 [2024-11-29 18:32:47.095649] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:27.380 [2024-11-29 18:32:47.095657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.380 [2024-11-29 18:32:47.095665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:27.380 [2024-11-29 18:32:47.095672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:19:27.380 [2024-11-29 18:32:47.095680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.380 [2024-11-29 18:32:47.095772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.380 [2024-11-29 18:32:47.095780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:27.380 [2024-11-29 18:32:47.095788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:27.380 [2024-11-29 18:32:47.095796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.380 [2024-11-29 18:32:47.095899] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:27.380 [2024-11-29 18:32:47.095916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:27.380 [2024-11-29 18:32:47.095926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:27.380 [2024-11-29 18:32:47.095935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.380 [2024-11-29 18:32:47.095943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:27.380 [2024-11-29 18:32:47.095952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:27.380 [2024-11-29 18:32:47.095960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:27.380 [2024-11-29 18:32:47.095971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:27.380 [2024-11-29 18:32:47.095983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:27.380 [2024-11-29 18:32:47.095991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:27.380 [2024-11-29 18:32:47.096000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:27.380 [2024-11-29 18:32:47.096008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:27.380 [2024-11-29 18:32:47.096015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:27.380 [2024-11-29 18:32:47.096023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:27.380 [2024-11-29 18:32:47.096031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:27.380 [2024-11-29 18:32:47.096038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.380 [2024-11-29 18:32:47.096046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:27.380 [2024-11-29 18:32:47.096054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:27.380 [2024-11-29 18:32:47.096061] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.380 [2024-11-29 18:32:47.096069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:27.380 [2024-11-29 18:32:47.096077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:27.380 [2024-11-29 18:32:47.096085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.380 [2024-11-29 18:32:47.096092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:27.380 [2024-11-29 18:32:47.096104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:27.380 [2024-11-29 18:32:47.096112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.380 [2024-11-29 18:32:47.096120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:27.380 [2024-11-29 18:32:47.096127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:27.380 [2024-11-29 18:32:47.096135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.380 [2024-11-29 18:32:47.096143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:27.380 [2024-11-29 18:32:47.096150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:27.380 [2024-11-29 18:32:47.096158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.380 [2024-11-29 18:32:47.096165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:27.380 [2024-11-29 18:32:47.096171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:27.380 [2024-11-29 18:32:47.096177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:27.380 [2024-11-29 18:32:47.096184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:27.380 [2024-11-29 18:32:47.096191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:27.380 [2024-11-29 18:32:47.096197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:27.380 [2024-11-29 18:32:47.096203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:27.380 [2024-11-29 18:32:47.096210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:27.380 [2024-11-29 18:32:47.096219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.380 [2024-11-29 18:32:47.096228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:27.380 [2024-11-29 18:32:47.096235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:27.380 [2024-11-29 18:32:47.096242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.380 [2024-11-29 18:32:47.096249] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:27.380 [2024-11-29 18:32:47.096257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:27.380 [2024-11-29 18:32:47.096264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:27.380 [2024-11-29 18:32:47.096271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.380 [2024-11-29 18:32:47.096287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:27.380 [2024-11-29 18:32:47.096294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:27.380 [2024-11-29 18:32:47.096301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:27.380 
[2024-11-29 18:32:47.096308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:27.380 [2024-11-29 18:32:47.096314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:27.380 [2024-11-29 18:32:47.096321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:27.380 [2024-11-29 18:32:47.096329] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:27.380 [2024-11-29 18:32:47.096339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:27.380 [2024-11-29 18:32:47.096352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:27.380 [2024-11-29 18:32:47.096360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:27.380 [2024-11-29 18:32:47.096368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:27.380 [2024-11-29 18:32:47.096375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:27.380 [2024-11-29 18:32:47.096382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:27.380 [2024-11-29 18:32:47.096389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:27.380 [2024-11-29 18:32:47.096396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:27.380 [2024-11-29 18:32:47.096403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:27.380 [2024-11-29 18:32:47.096411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:27.380 [2024-11-29 18:32:47.096418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:27.380 [2024-11-29 18:32:47.096426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:27.380 [2024-11-29 18:32:47.096433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:27.380 [2024-11-29 18:32:47.096440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:27.380 [2024-11-29 18:32:47.096448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:27.381 [2024-11-29 18:32:47.096469] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:27.381 [2024-11-29 18:32:47.096480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:27.381 [2024-11-29 18:32:47.096492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:27.381 [2024-11-29 18:32:47.096502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:27.381 [2024-11-29 18:32:47.096510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:27.381 [2024-11-29 18:32:47.096518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:27.381 [2024-11-29 18:32:47.096526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.096534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:27.381 [2024-11-29 18:32:47.096541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:19:27.381 [2024-11-29 18:32:47.096549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.109935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.110110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:27.381 [2024-11-29 18:32:47.110128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.334 ms 00:19:27.381 [2024-11-29 18:32:47.110137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.110277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.110287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:27.381 [2024-11-29 18:32:47.110301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:27.381 [2024-11-29 18:32:47.110309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.127745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.127800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:27.381 [2024-11-29 18:32:47.127814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.411 ms 00:19:27.381 [2024-11-29 18:32:47.127822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.127914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.127926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:27.381 [2024-11-29 18:32:47.127935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:27.381 [2024-11-29 18:32:47.127943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.128412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.128444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:27.381 [2024-11-29 18:32:47.128475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:19:27.381 [2024-11-29 18:32:47.128492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.128649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.128662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:27.381 [2024-11-29 18:32:47.128675] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:19:27.381 [2024-11-29 18:32:47.128683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.136263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.136317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:27.381 [2024-11-29 18:32:47.136330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.556 ms 00:19:27.381 [2024-11-29 18:32:47.136338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.140019] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:27.381 [2024-11-29 18:32:47.140206] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:27.381 [2024-11-29 18:32:47.140224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.140233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:27.381 [2024-11-29 18:32:47.140242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.788 ms 00:19:27.381 [2024-11-29 18:32:47.140249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.156271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.156331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:27.381 [2024-11-29 18:32:47.156343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.966 ms 00:19:27.381 [2024-11-29 18:32:47.156351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.159328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.159504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:27.381 [2024-11-29 18:32:47.159522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.887 ms 00:19:27.381 [2024-11-29 18:32:47.159529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.162221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.162264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:27.381 [2024-11-29 18:32:47.162274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.638 ms 00:19:27.381 [2024-11-29 18:32:47.162281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.162646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.162660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:27.381 [2024-11-29 18:32:47.162669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:19:27.381 [2024-11-29 18:32:47.162693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.185439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.185510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:27.381 [2024-11-29 18:32:47.185522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.716 ms 00:19:27.381 [2024-11-29 18:32:47.185530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.193650] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:27.381 [2024-11-29 18:32:47.210897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.210943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:27.381 [2024-11-29 18:32:47.210955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.275 ms 00:19:27.381 [2024-11-29 18:32:47.210963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.211049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.211061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:27.381 [2024-11-29 18:32:47.211077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:27.381 [2024-11-29 18:32:47.211086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.211139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.211149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:27.381 [2024-11-29 18:32:47.211157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:27.381 [2024-11-29 18:32:47.211165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.211192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.211201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:27.381 [2024-11-29 18:32:47.211210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:27.381 [2024-11-29 18:32:47.211220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.211255] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:27.381 [2024-11-29 18:32:47.211266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.211273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:27.381 [2024-11-29 18:32:47.211282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:27.381 [2024-11-29 18:32:47.211290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.216680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.216845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:27.381 [2024-11-29 18:32:47.216863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.371 ms 00:19:27.381 [2024-11-29 18:32:47.216882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 [2024-11-29 18:32:47.216969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.381 [2024-11-29 18:32:47.216980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:27.381 [2024-11-29 18:32:47.216988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:27.381 [2024-11-29 18:32:47.216996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.381 
[2024-11-29 18:32:47.217921] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:27.381 [2024-11-29 18:32:47.219197] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 142.971 ms, result 0 00:19:27.381 [2024-11-29 18:32:47.220006] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:27.381 [2024-11-29 18:32:47.227767] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:28.769  [2024-11-29T18:32:49.619Z] Copying: 15/256 [MB] (15 MBps) [2024-11-29T18:32:50.564Z] Copying: 33/256 [MB] (18 MBps) [2024-11-29T18:32:51.508Z] Copying: 51/256 [MB] (17 MBps) [2024-11-29T18:32:52.451Z] Copying: 69/256 [MB] (17 MBps) [2024-11-29T18:32:53.396Z] Copying: 84/256 [MB] (15 MBps) [2024-11-29T18:32:54.341Z] Copying: 103/256 [MB] (19 MBps) [2024-11-29T18:32:55.302Z] Copying: 125/256 [MB] (21 MBps) [2024-11-29T18:32:56.689Z] Copying: 143/256 [MB] (17 MBps) [2024-11-29T18:32:57.633Z] Copying: 162/256 [MB] (19 MBps) [2024-11-29T18:32:58.578Z] Copying: 181/256 [MB] (18 MBps) [2024-11-29T18:32:59.520Z] Copying: 200/256 [MB] (18 MBps) [2024-11-29T18:33:00.463Z] Copying: 221/256 [MB] (20 MBps) [2024-11-29T18:33:01.404Z] Copying: 237/256 [MB] (16 MBps) [2024-11-29T18:33:01.977Z] Copying: 251/256 [MB] (13 MBps) [2024-11-29T18:33:01.977Z] Copying: 256/256 [MB] (average 17 MBps)[2024-11-29 18:33:01.792701] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:42.072 [2024-11-29 18:33:01.795747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.072 [2024-11-29 18:33:01.795842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:42.072 [2024-11-29 18:33:01.795874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:42.072 [2024-11-29 18:33:01.795896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.072 [2024-11-29 18:33:01.795952] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:42.072 [2024-11-29 18:33:01.797173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.072 [2024-11-29 18:33:01.797256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:42.072 [2024-11-29 18:33:01.797307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:19:42.072 [2024-11-29 18:33:01.797332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.072 [2024-11-29 18:33:01.798293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.072 [2024-11-29 18:33:01.798544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:42.072 [2024-11-29 18:33:01.798569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.892 ms 00:19:42.072 [2024-11-29 18:33:01.798578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.072 [2024-11-29 18:33:01.802563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.072 [2024-11-29 18:33:01.802602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:42.072 [2024-11-29 18:33:01.802614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.961 ms 00:19:42.072 [2024-11-29 18:33:01.802623] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.072 [2024-11-29 18:33:01.809951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.072 [2024-11-29 18:33:01.810014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:42.072 [2024-11-29 18:33:01.810026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.284 ms 00:19:42.072 [2024-11-29 18:33:01.810038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.072 [2024-11-29 18:33:01.813000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.072 [2024-11-29 18:33:01.813191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:42.072 [2024-11-29 18:33:01.813211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.879 ms 00:19:42.072 [2024-11-29 18:33:01.813218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.072 [2024-11-29 18:33:01.818405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.072 [2024-11-29 18:33:01.818495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:42.072 [2024-11-29 18:33:01.818507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.058 ms 00:19:42.072 [2024-11-29 18:33:01.818524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.072 [2024-11-29 18:33:01.818663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.072 [2024-11-29 18:33:01.818674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:42.072 [2024-11-29 18:33:01.818687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:42.072 [2024-11-29 18:33:01.818695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.072 [2024-11-29 18:33:01.822488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.072 [2024-11-29 18:33:01.822535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:42.072 [2024-11-29 18:33:01.822545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.767 ms 00:19:42.072 [2024-11-29 18:33:01.822552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.072 [2024-11-29 18:33:01.825269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.072 [2024-11-29 18:33:01.825317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:42.072 [2024-11-29 18:33:01.825327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.669 ms 00:19:42.072 [2024-11-29 18:33:01.825334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.072 [2024-11-29 18:33:01.827565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.072 [2024-11-29 18:33:01.827743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:42.072 [2024-11-29 18:33:01.827762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.186 ms 00:19:42.072 [2024-11-29 18:33:01.827770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.072 [2024-11-29 18:33:01.830321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.072 [2024-11-29 18:33:01.830374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:42.072 [2024-11-29 18:33:01.830385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
2.402 ms 00:19:42.072 [2024-11-29 18:33:01.830392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.072 [2024-11-29 18:33:01.830438] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:42.072 [2024-11-29 18:33:01.830472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830649] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:42.072 [2024-11-29 18:33:01.830665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 
[2024-11-29 18:33:01.830854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.830998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:19:42.073 [2024-11-29 18:33:01.831042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 
0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:42.073 [2024-11-29 18:33:01.831266] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:42.073 [2024-11-29 18:33:01.831333] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 810cac8b-8d30-4fbe-8f84-c99f968afe2f 00:19:42.073 [2024-11-29 18:33:01.831348] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:42.073 [2024-11-29 18:33:01.831356] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:42.073 [2024-11-29 18:33:01.831363] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:42.073 [2024-11-29 18:33:01.831371] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:42.073 [2024-11-29 18:33:01.831378] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:42.073 [2024-11-29 18:33:01.831389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:42.073 [2024-11-29 18:33:01.831397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:42.073 [2024-11-29 18:33:01.831403] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:42.073 [2024-11-29 18:33:01.831409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:42.073 [2024-11-29 18:33:01.831417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.073 [2024-11-29 18:33:01.831428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:42.073 [2024-11-29 18:33:01.831438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:19:42.073 [2024-11-29 18:33:01.831446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.073 [2024-11-29 18:33:01.833797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.073 [2024-11-29 18:33:01.833830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:42.073 [2024-11-29 18:33:01.833841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.316 ms 00:19:42.073 [2024-11-29 18:33:01.833856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.073 [2024-11-29 18:33:01.833994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.074 [2024-11-29 18:33:01.834003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:42.074 [2024-11-29 18:33:01.834013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:19:42.074 [2024-11-29 18:33:01.834020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.074 [2024-11-29 18:33:01.842137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.074 [2024-11-29 18:33:01.842307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:42.074 [2024-11-29 18:33:01.842373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.074 [2024-11-29 18:33:01.842396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.074 [2024-11-29 18:33:01.842497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.074 [2024-11-29 18:33:01.842520] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:42.074 [2024-11-29 18:33:01.842540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.074 [2024-11-29 18:33:01.842564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.074 [2024-11-29 18:33:01.842629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.074 [2024-11-29 18:33:01.842653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:42.074 [2024-11-29 18:33:01.842674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.074 [2024-11-29 18:33:01.842734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.074 [2024-11-29 18:33:01.842775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.074 [2024-11-29 18:33:01.842796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:42.074 [2024-11-29 18:33:01.842816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.074 [2024-11-29 18:33:01.842861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.074 [2024-11-29 18:33:01.856018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.074 [2024-11-29 18:33:01.856192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:42.074 [2024-11-29 18:33:01.856247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.074 [2024-11-29 18:33:01.856265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.074 [2024-11-29 18:33:01.866216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.074 [2024-11-29 18:33:01.866274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:42.074 [2024-11-29 18:33:01.866285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.074 [2024-11-29 18:33:01.866294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.074 [2024-11-29 18:33:01.866340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.074 [2024-11-29 18:33:01.866350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:42.074 [2024-11-29 18:33:01.866358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.074 [2024-11-29 18:33:01.866366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.074 [2024-11-29 18:33:01.866400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.074 [2024-11-29 18:33:01.866410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:42.074 [2024-11-29 18:33:01.866419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.074 [2024-11-29 18:33:01.866427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.074 [2024-11-29 18:33:01.866519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.074 [2024-11-29 18:33:01.866531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:42.074 [2024-11-29 18:33:01.866544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.074 [2024-11-29 18:33:01.866553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.074 [2024-11-29 18:33:01.866585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:19:42.074 [2024-11-29 18:33:01.866598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:42.074 [2024-11-29 18:33:01.866606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.074 [2024-11-29 18:33:01.866614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.074 [2024-11-29 18:33:01.866655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.074 [2024-11-29 18:33:01.866669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:42.074 [2024-11-29 18:33:01.866678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.074 [2024-11-29 18:33:01.866686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.074 [2024-11-29 18:33:01.866738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.074 [2024-11-29 18:33:01.866748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:42.074 [2024-11-29 18:33:01.866756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.074 [2024-11-29 18:33:01.866765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.074 [2024-11-29 18:33:01.866912] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.158 ms, result 0 00:19:42.334 00:19:42.334 00:19:42.334 18:33:02 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:42.906 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:42.906 18:33:02 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:42.906 18:33:02 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:42.906 18:33:02 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:42.906 18:33:02 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:42.906 18:33:02 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:42.906 18:33:02 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:42.906 18:33:02 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 88132 00:19:42.906 18:33:02 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 88132 ']' 00:19:42.906 Process with pid 88132 is not found 00:19:42.906 ************************************ 00:19:42.906 END TEST ftl_trim 00:19:42.906 ************************************ 00:19:42.906 18:33:02 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 88132 00:19:42.906 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88132) - No such process 00:19:42.906 18:33:02 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 88132 is not found' 00:19:42.906 00:19:42.906 real 1m2.845s 00:19:42.906 user 1m27.049s 00:19:42.906 sys 0m5.054s 00:19:42.906 18:33:02 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.906 18:33:02 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:42.906 18:33:02 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:42.906 18:33:02 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:42.906 18:33:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.906 18:33:02 ftl -- common/autotest_common.sh@10 -- 
# set +x 00:19:43.168 ************************************ 00:19:43.168 START TEST ftl_restore 00:19:43.168 ************************************ 00:19:43.168 18:33:02 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:43.168 * Looking for test storage... 00:19:43.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:43.168 18:33:02 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:43.168 18:33:02 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:19:43.168 18:33:02 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:43.168 18:33:02 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:43.168 18:33:02 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:19:43.168 18:33:02 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:43.168 18:33:02 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:43.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.168 --rc genhtml_branch_coverage=1 00:19:43.168 --rc genhtml_function_coverage=1 00:19:43.168 --rc genhtml_legend=1 00:19:43.168 --rc geninfo_all_blocks=1 00:19:43.168 --rc geninfo_unexecuted_blocks=1 00:19:43.168 00:19:43.168 ' 00:19:43.168 18:33:02 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:43.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.168 --rc genhtml_branch_coverage=1 00:19:43.168 --rc genhtml_function_coverage=1 00:19:43.168 --rc genhtml_legend=1 00:19:43.168 --rc geninfo_all_blocks=1 00:19:43.168 --rc geninfo_unexecuted_blocks=1 00:19:43.168 00:19:43.168 ' 00:19:43.168 18:33:02 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:43.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.168 --rc genhtml_branch_coverage=1 00:19:43.168 --rc genhtml_function_coverage=1 00:19:43.168 --rc genhtml_legend=1 00:19:43.168 --rc geninfo_all_blocks=1 00:19:43.168 --rc geninfo_unexecuted_blocks=1 00:19:43.168 00:19:43.168 ' 00:19:43.168 18:33:02 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:43.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.168 --rc genhtml_branch_coverage=1 00:19:43.168 --rc genhtml_function_coverage=1 00:19:43.168 --rc genhtml_legend=1 00:19:43.168 --rc geninfo_all_blocks=1 00:19:43.168 --rc geninfo_unexecuted_blocks=1 00:19:43.168 00:19:43.168 ' 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
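The `lt 1.15 2` xtrace above is scripts/common.sh deciding whether the installed lcov predates version 2: `cmp_versions` splits both version strings on `.-:` and compares the fields numerically, treating missing fields as zero. A minimal reconstruction of that idiom, pieced together from the trace rather than copied from the repo (the trace's `decimal` validation of each field is omitted here):

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 lt=0 gt=0 v ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"          # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"          # "2"    -> (2)
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then lt=1; break; fi
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then gt=1; break; fi
        done
        case $op in
            '<') (( lt == 1 )) ;;
            '>') (( gt == 1 )) ;;
        esac
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "lcov is older than 2"    # true for this run, as in the trace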
00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:43.168 18:33:02 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:43.168 18:33:03 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:19:43.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:43.168 18:33:03 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.cfDiENvsFG 00:19:43.168 18:33:03 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:43.168 18:33:03 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:19:43.168 18:33:03 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:43.168 18:33:03 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:43.168 18:33:03 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:19:43.168 18:33:03 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:43.168 18:33:03 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:19:43.168 18:33:03 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:43.168 18:33:03 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=88405 00:19:43.168 18:33:03 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 88405 00:19:43.168 18:33:03 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 88405 ']' 00:19:43.168 18:33:03 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.168 18:33:03 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.168 18:33:03 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.168 18:33:03 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.168 18:33:03 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:43.168 18:33:03 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.431 [2024-11-29 18:33:03.101408] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
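At this point restore.sh has parsed its options and started the target: `-c 0000:00:10.0` selected the NV-cache PCIe address, the remaining positional argument `0000:00:11.0` became the base device, and `waitforlisten 88405` blocks (max_retries=100) until spdk_tgt answers on /var/tmp/spdk.sock. A condensed sketch of those two steps; the `rpc_get_methods` liveness probe is an assumption, and the real waitforlisten in autotest_common.sh is more elaborate:

    while getopts ':u:c:f' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;    # -c 0000:00:10.0 in this run
            *) ;;                     # -u and -f are not exercised here
        esac
    done
    shift $((OPTIND - 1))             # the trace shows "shift 2" for the single -c pair
    device=$1                         # 0000:00:11.0
    timeout=240

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    svcpid=$!                         # 88405 in this run
    for ((i = 0; i < 100; i++)); do
        kill -0 "$svcpid" || { echo "spdk_tgt exited early" >&2; exit 1; }
        # Cheap RPC probe; succeeds once the UNIX domain socket is up and serving.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &> /dev/null && break
        sleep 0.5
    done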
00:19:43.431 [2024-11-29 18:33:03.101595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88405 ] 00:19:43.431 [2024-11-29 18:33:03.268961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.431 [2024-11-29 18:33:03.300664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.377 18:33:03 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.377 18:33:03 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:19:44.377 18:33:03 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:44.377 18:33:03 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:19:44.377 18:33:03 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:44.377 18:33:03 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:19:44.377 18:33:03 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:19:44.377 18:33:03 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:44.377 18:33:04 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:44.377 18:33:04 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:19:44.377 18:33:04 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:44.377 18:33:04 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:44.377 18:33:04 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:44.377 18:33:04 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:44.377 18:33:04 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:44.377 18:33:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:44.638 18:33:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:44.639 { 00:19:44.639 "name": "nvme0n1", 00:19:44.639 "aliases": [ 00:19:44.639 "5fbdde01-30df-41eb-a85e-2a29df6773c9" 00:19:44.639 ], 00:19:44.639 "product_name": "NVMe disk", 00:19:44.639 "block_size": 4096, 00:19:44.639 "num_blocks": 1310720, 00:19:44.639 "uuid": "5fbdde01-30df-41eb-a85e-2a29df6773c9", 00:19:44.639 "numa_id": -1, 00:19:44.639 "assigned_rate_limits": { 00:19:44.639 "rw_ios_per_sec": 0, 00:19:44.639 "rw_mbytes_per_sec": 0, 00:19:44.639 "r_mbytes_per_sec": 0, 00:19:44.639 "w_mbytes_per_sec": 0 00:19:44.639 }, 00:19:44.639 "claimed": true, 00:19:44.639 "claim_type": "read_many_write_one", 00:19:44.639 "zoned": false, 00:19:44.639 "supported_io_types": { 00:19:44.639 "read": true, 00:19:44.639 "write": true, 00:19:44.639 "unmap": true, 00:19:44.639 "flush": true, 00:19:44.639 "reset": true, 00:19:44.639 "nvme_admin": true, 00:19:44.639 "nvme_io": true, 00:19:44.639 "nvme_io_md": false, 00:19:44.639 "write_zeroes": true, 00:19:44.639 "zcopy": false, 00:19:44.639 "get_zone_info": false, 00:19:44.639 "zone_management": false, 00:19:44.639 "zone_append": false, 00:19:44.639 "compare": true, 00:19:44.639 "compare_and_write": false, 00:19:44.639 "abort": true, 00:19:44.639 "seek_hole": false, 00:19:44.639 "seek_data": false, 00:19:44.639 "copy": true, 00:19:44.639 "nvme_iov_md": false 00:19:44.639 }, 00:19:44.639 "driver_specific": { 00:19:44.639 "nvme": [ 
00:19:44.639 { 00:19:44.639 "pci_address": "0000:00:11.0", 00:19:44.639 "trid": { 00:19:44.639 "trtype": "PCIe", 00:19:44.639 "traddr": "0000:00:11.0" 00:19:44.639 }, 00:19:44.639 "ctrlr_data": { 00:19:44.639 "cntlid": 0, 00:19:44.639 "vendor_id": "0x1b36", 00:19:44.639 "model_number": "QEMU NVMe Ctrl", 00:19:44.639 "serial_number": "12341", 00:19:44.639 "firmware_revision": "8.0.0", 00:19:44.639 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:44.639 "oacs": { 00:19:44.639 "security": 0, 00:19:44.639 "format": 1, 00:19:44.639 "firmware": 0, 00:19:44.639 "ns_manage": 1 00:19:44.639 }, 00:19:44.639 "multi_ctrlr": false, 00:19:44.639 "ana_reporting": false 00:19:44.639 }, 00:19:44.639 "vs": { 00:19:44.639 "nvme_version": "1.4" 00:19:44.639 }, 00:19:44.639 "ns_data": { 00:19:44.639 "id": 1, 00:19:44.639 "can_share": false 00:19:44.639 } 00:19:44.639 } 00:19:44.639 ], 00:19:44.639 "mp_policy": "active_passive" 00:19:44.639 } 00:19:44.639 } 00:19:44.639 ]' 00:19:44.639 18:33:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:44.639 18:33:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:44.639 18:33:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:44.639 18:33:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:44.639 18:33:04 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:44.639 18:33:04 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:19:44.639 18:33:04 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:19:44.639 18:33:04 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:44.639 18:33:04 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:19:44.639 18:33:04 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:44.639 18:33:04 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:44.901 18:33:04 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=2300869b-4d61-4517-b49c-5f6c70ada65d 00:19:44.901 18:33:04 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:19:44.901 18:33:04 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2300869b-4d61-4517-b49c-5f6c70ada65d 00:19:45.160 18:33:04 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:45.420 18:33:05 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=343e90de-d3a9-47bd-a146-2f2cff852dc2 00:19:45.420 18:33:05 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 343e90de-d3a9-47bd-a146-2f2cff852dc2 00:19:45.680 18:33:05 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=36515175-6ce2-4035-9f70-dbb23ffa81c7 00:19:45.680 18:33:05 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:19:45.680 18:33:05 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 36515175-6ce2-4035-9f70-dbb23ffa81c7 00:19:45.680 18:33:05 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:19:45.680 18:33:05 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:45.680 18:33:05 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=36515175-6ce2-4035-9f70-dbb23ffa81c7 00:19:45.680 18:33:05 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:19:45.680 18:33:05 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
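get_bdev_size, traced just above, is a thin wrapper around bdev_get_bdevs: block_size × num_blocks, converted to MiB. For nvme0n1 that is 1310720 blocks × 4096 B = 5368709120 B = 5120 MiB, which is exactly the `bdev_size=5120` the trace reports. A standalone equivalent, using the same jq filters as the trace:

    bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")     # 4096
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")     # 1310720
    echo $(( bs * nb / 1024 / 1024 ))               # 5120 (MiB)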
36515175-6ce2-4035-9f70-dbb23ffa81c7 00:19:45.680 18:33:05 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=36515175-6ce2-4035-9f70-dbb23ffa81c7 00:19:45.680 18:33:05 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:45.680 18:33:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:45.680 18:33:05 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:45.680 18:33:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 36515175-6ce2-4035-9f70-dbb23ffa81c7 00:19:45.941 18:33:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:45.941 { 00:19:45.941 "name": "36515175-6ce2-4035-9f70-dbb23ffa81c7", 00:19:45.941 "aliases": [ 00:19:45.941 "lvs/nvme0n1p0" 00:19:45.941 ], 00:19:45.941 "product_name": "Logical Volume", 00:19:45.941 "block_size": 4096, 00:19:45.941 "num_blocks": 26476544, 00:19:45.941 "uuid": "36515175-6ce2-4035-9f70-dbb23ffa81c7", 00:19:45.941 "assigned_rate_limits": { 00:19:45.941 "rw_ios_per_sec": 0, 00:19:45.941 "rw_mbytes_per_sec": 0, 00:19:45.941 "r_mbytes_per_sec": 0, 00:19:45.941 "w_mbytes_per_sec": 0 00:19:45.941 }, 00:19:45.941 "claimed": false, 00:19:45.941 "zoned": false, 00:19:45.941 "supported_io_types": { 00:19:45.941 "read": true, 00:19:45.941 "write": true, 00:19:45.941 "unmap": true, 00:19:45.941 "flush": false, 00:19:45.941 "reset": true, 00:19:45.941 "nvme_admin": false, 00:19:45.941 "nvme_io": false, 00:19:45.941 "nvme_io_md": false, 00:19:45.941 "write_zeroes": true, 00:19:45.941 "zcopy": false, 00:19:45.941 "get_zone_info": false, 00:19:45.941 "zone_management": false, 00:19:45.941 "zone_append": false, 00:19:45.941 "compare": false, 00:19:45.941 "compare_and_write": false, 00:19:45.941 "abort": false, 00:19:45.941 "seek_hole": true, 00:19:45.941 "seek_data": true, 00:19:45.941 "copy": false, 00:19:45.941 "nvme_iov_md": false 00:19:45.941 }, 00:19:45.941 "driver_specific": { 00:19:45.941 "lvol": { 00:19:45.941 "lvol_store_uuid": "343e90de-d3a9-47bd-a146-2f2cff852dc2", 00:19:45.941 "base_bdev": "nvme0n1", 00:19:45.941 "thin_provision": true, 00:19:45.941 "num_allocated_clusters": 0, 00:19:45.941 "snapshot": false, 00:19:45.941 "clone": false, 00:19:45.941 "esnap_clone": false 00:19:45.941 } 00:19:45.941 } 00:19:45.941 } 00:19:45.941 ]' 00:19:45.941 18:33:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:45.941 18:33:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:45.941 18:33:05 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:45.941 18:33:05 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:45.941 18:33:05 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:45.941 18:33:05 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:45.941 18:33:05 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:19:45.941 18:33:05 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:19:45.941 18:33:05 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:46.202 18:33:05 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:46.202 18:33:05 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:46.202 18:33:05 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 36515175-6ce2-4035-9f70-dbb23ffa81c7 00:19:46.202 18:33:05 
ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=36515175-6ce2-4035-9f70-dbb23ffa81c7 00:19:46.202 18:33:05 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:46.202 18:33:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:46.202 18:33:05 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:46.202 18:33:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 36515175-6ce2-4035-9f70-dbb23ffa81c7 00:19:46.465 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:46.465 { 00:19:46.465 "name": "36515175-6ce2-4035-9f70-dbb23ffa81c7", 00:19:46.465 "aliases": [ 00:19:46.465 "lvs/nvme0n1p0" 00:19:46.465 ], 00:19:46.465 "product_name": "Logical Volume", 00:19:46.465 "block_size": 4096, 00:19:46.465 "num_blocks": 26476544, 00:19:46.465 "uuid": "36515175-6ce2-4035-9f70-dbb23ffa81c7", 00:19:46.465 "assigned_rate_limits": { 00:19:46.465 "rw_ios_per_sec": 0, 00:19:46.465 "rw_mbytes_per_sec": 0, 00:19:46.465 "r_mbytes_per_sec": 0, 00:19:46.465 "w_mbytes_per_sec": 0 00:19:46.465 }, 00:19:46.465 "claimed": false, 00:19:46.465 "zoned": false, 00:19:46.465 "supported_io_types": { 00:19:46.465 "read": true, 00:19:46.465 "write": true, 00:19:46.465 "unmap": true, 00:19:46.465 "flush": false, 00:19:46.465 "reset": true, 00:19:46.465 "nvme_admin": false, 00:19:46.465 "nvme_io": false, 00:19:46.465 "nvme_io_md": false, 00:19:46.465 "write_zeroes": true, 00:19:46.465 "zcopy": false, 00:19:46.465 "get_zone_info": false, 00:19:46.465 "zone_management": false, 00:19:46.465 "zone_append": false, 00:19:46.465 "compare": false, 00:19:46.465 "compare_and_write": false, 00:19:46.465 "abort": false, 00:19:46.465 "seek_hole": true, 00:19:46.465 "seek_data": true, 00:19:46.465 "copy": false, 00:19:46.465 "nvme_iov_md": false 00:19:46.465 }, 00:19:46.465 "driver_specific": { 00:19:46.465 "lvol": { 00:19:46.465 "lvol_store_uuid": "343e90de-d3a9-47bd-a146-2f2cff852dc2", 00:19:46.465 "base_bdev": "nvme0n1", 00:19:46.465 "thin_provision": true, 00:19:46.465 "num_allocated_clusters": 0, 00:19:46.465 "snapshot": false, 00:19:46.465 "clone": false, 00:19:46.465 "esnap_clone": false 00:19:46.465 } 00:19:46.465 } 00:19:46.465 } 00:19:46.465 ]' 00:19:46.465 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:46.465 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:46.465 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:46.465 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:46.465 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:46.465 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:46.465 18:33:06 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:19:46.465 18:33:06 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:46.726 18:33:06 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:46.726 18:33:06 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 36515175-6ce2-4035-9f70-dbb23ffa81c7 00:19:46.726 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=36515175-6ce2-4035-9f70-dbb23ffa81c7 00:19:46.726 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:46.726 18:33:06 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # local bs 00:19:46.726 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:46.726 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 36515175-6ce2-4035-9f70-dbb23ffa81c7 00:19:46.988 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:46.988 { 00:19:46.988 "name": "36515175-6ce2-4035-9f70-dbb23ffa81c7", 00:19:46.988 "aliases": [ 00:19:46.988 "lvs/nvme0n1p0" 00:19:46.988 ], 00:19:46.988 "product_name": "Logical Volume", 00:19:46.988 "block_size": 4096, 00:19:46.988 "num_blocks": 26476544, 00:19:46.988 "uuid": "36515175-6ce2-4035-9f70-dbb23ffa81c7", 00:19:46.988 "assigned_rate_limits": { 00:19:46.988 "rw_ios_per_sec": 0, 00:19:46.988 "rw_mbytes_per_sec": 0, 00:19:46.988 "r_mbytes_per_sec": 0, 00:19:46.988 "w_mbytes_per_sec": 0 00:19:46.988 }, 00:19:46.988 "claimed": false, 00:19:46.988 "zoned": false, 00:19:46.988 "supported_io_types": { 00:19:46.988 "read": true, 00:19:46.988 "write": true, 00:19:46.988 "unmap": true, 00:19:46.988 "flush": false, 00:19:46.988 "reset": true, 00:19:46.988 "nvme_admin": false, 00:19:46.988 "nvme_io": false, 00:19:46.988 "nvme_io_md": false, 00:19:46.988 "write_zeroes": true, 00:19:46.988 "zcopy": false, 00:19:46.988 "get_zone_info": false, 00:19:46.988 "zone_management": false, 00:19:46.988 "zone_append": false, 00:19:46.988 "compare": false, 00:19:46.988 "compare_and_write": false, 00:19:46.988 "abort": false, 00:19:46.988 "seek_hole": true, 00:19:46.988 "seek_data": true, 00:19:46.988 "copy": false, 00:19:46.988 "nvme_iov_md": false 00:19:46.988 }, 00:19:46.988 "driver_specific": { 00:19:46.988 "lvol": { 00:19:46.988 "lvol_store_uuid": "343e90de-d3a9-47bd-a146-2f2cff852dc2", 00:19:46.988 "base_bdev": "nvme0n1", 00:19:46.988 "thin_provision": true, 00:19:46.988 "num_allocated_clusters": 0, 00:19:46.988 "snapshot": false, 00:19:46.988 "clone": false, 00:19:46.988 "esnap_clone": false 00:19:46.988 } 00:19:46.988 } 00:19:46.988 } 00:19:46.988 ]' 00:19:46.988 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:46.988 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:46.988 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:46.988 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:46.988 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:46.988 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:46.988 18:33:06 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:46.988 18:33:06 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 36515175-6ce2-4035-9f70-dbb23ffa81c7 --l2p_dram_limit 10' 00:19:46.988 18:33:06 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:46.988 18:33:06 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:46.988 18:33:06 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:46.988 18:33:06 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:46.988 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:46.988 18:33:06 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 36515175-6ce2-4035-9f70-dbb23ffa81c7 --l2p_dram_limit 10 -c nvc0n1p0 00:19:47.251 
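The 5171 MiB carved out of nvc0n1 for the write-buffer cache is consistent with sizing it at one twentieth of the 103424 MiB base volume: 103424 / 20 = 5171 in integer arithmetic. That ratio is inferred from the numbers in this run, not read out of common.sh. The construction the trace just issued, condensed (lvol UUID as created earlier in this run):

    base_mb=103424
    cache_mb=$(( base_mb / 20 ))        # 5171; inferred ratio, see note above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s "$cache_mb" 1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d 36515175-6ce2-4035-9f70-dbb23ffa81c7 \
        --l2p_dram_limit 10 -c nvc0n1p0   # keep at most 10 MiB of L2P in DRAM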
[2024-11-29 18:33:06.921989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.251 [2024-11-29 18:33:06.922031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:47.251 [2024-11-29 18:33:06.922042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:47.251 [2024-11-29 18:33:06.922050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.251 [2024-11-29 18:33:06.922105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.251 [2024-11-29 18:33:06.922116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:47.251 [2024-11-29 18:33:06.922123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:47.251 [2024-11-29 18:33:06.922131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.251 [2024-11-29 18:33:06.922148] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:47.251 [2024-11-29 18:33:06.922365] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:47.251 [2024-11-29 18:33:06.922381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.251 [2024-11-29 18:33:06.922389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:47.251 [2024-11-29 18:33:06.922396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:19:47.252 [2024-11-29 18:33:06.922402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.252 [2024-11-29 18:33:06.922451] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d5cc8cc9-2901-46c0-b9e1-4e931c6522e6 00:19:47.252 [2024-11-29 18:33:06.923428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.252 [2024-11-29 18:33:06.923462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:47.252 [2024-11-29 18:33:06.923472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:47.252 [2024-11-29 18:33:06.923478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.252 [2024-11-29 18:33:06.928208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.252 [2024-11-29 18:33:06.928237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:47.252 [2024-11-29 18:33:06.928246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.696 ms 00:19:47.252 [2024-11-29 18:33:06.928252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.252 [2024-11-29 18:33:06.928315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.252 [2024-11-29 18:33:06.928322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:47.252 [2024-11-29 18:33:06.928330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:47.252 [2024-11-29 18:33:06.928336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.252 [2024-11-29 18:33:06.928372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.252 [2024-11-29 18:33:06.928379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:47.252 [2024-11-29 18:33:06.928386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:47.252 [2024-11-29 18:33:06.928391] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.252 [2024-11-29 18:33:06.928409] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:47.252 [2024-11-29 18:33:06.929674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.252 [2024-11-29 18:33:06.929698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:47.252 [2024-11-29 18:33:06.929706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.271 ms 00:19:47.252 [2024-11-29 18:33:06.929713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.252 [2024-11-29 18:33:06.929745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.252 [2024-11-29 18:33:06.929754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:47.252 [2024-11-29 18:33:06.929760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:47.252 [2024-11-29 18:33:06.929769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.252 [2024-11-29 18:33:06.929782] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:47.252 [2024-11-29 18:33:06.929897] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:47.252 [2024-11-29 18:33:06.929915] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:47.252 [2024-11-29 18:33:06.929924] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:47.252 [2024-11-29 18:33:06.929934] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:47.252 [2024-11-29 18:33:06.929943] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:47.252 [2024-11-29 18:33:06.929951] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:47.252 [2024-11-29 18:33:06.929959] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:47.252 [2024-11-29 18:33:06.929964] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:47.252 [2024-11-29 18:33:06.929971] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:47.252 [2024-11-29 18:33:06.929977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.252 [2024-11-29 18:33:06.929984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:47.252 [2024-11-29 18:33:06.929989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:19:47.252 [2024-11-29 18:33:06.929996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.252 [2024-11-29 18:33:06.930059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.252 [2024-11-29 18:33:06.930077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:47.252 [2024-11-29 18:33:06.930082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:47.252 [2024-11-29 18:33:06.930090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.252 [2024-11-29 18:33:06.930162] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:47.252 [2024-11-29 18:33:06.930170] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:19:47.252 [2024-11-29 18:33:06.930177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:47.252 [2024-11-29 18:33:06.930187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:47.252 [2024-11-29 18:33:06.930193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:47.252 [2024-11-29 18:33:06.930199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:47.252 [2024-11-29 18:33:06.930204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:47.252 [2024-11-29 18:33:06.930211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:47.252 [2024-11-29 18:33:06.930217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:47.252 [2024-11-29 18:33:06.930224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:47.252 [2024-11-29 18:33:06.930229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:47.252 [2024-11-29 18:33:06.930236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:47.252 [2024-11-29 18:33:06.930241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:47.252 [2024-11-29 18:33:06.930250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:47.252 [2024-11-29 18:33:06.930255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:47.252 [2024-11-29 18:33:06.930262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:47.252 [2024-11-29 18:33:06.930267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:47.252 [2024-11-29 18:33:06.930273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:47.252 [2024-11-29 18:33:06.930278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:47.252 [2024-11-29 18:33:06.930284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:47.252 [2024-11-29 18:33:06.930289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:47.252 [2024-11-29 18:33:06.930295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:47.252 [2024-11-29 18:33:06.930300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:47.252 [2024-11-29 18:33:06.930307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:47.252 [2024-11-29 18:33:06.930312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:47.252 [2024-11-29 18:33:06.930318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:47.252 [2024-11-29 18:33:06.930323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:47.252 [2024-11-29 18:33:06.930329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:47.252 [2024-11-29 18:33:06.930334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:47.252 [2024-11-29 18:33:06.930342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:47.252 [2024-11-29 18:33:06.930347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:47.252 [2024-11-29 18:33:06.930353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:47.252 [2024-11-29 18:33:06.930358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:47.252 [2024-11-29 18:33:06.930364] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:47.252 [2024-11-29 18:33:06.930369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:47.252 [2024-11-29 18:33:06.930375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:47.252 [2024-11-29 18:33:06.930380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:47.252 [2024-11-29 18:33:06.930386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:47.252 [2024-11-29 18:33:06.930390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:47.252 [2024-11-29 18:33:06.930398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:47.252 [2024-11-29 18:33:06.930403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:47.252 [2024-11-29 18:33:06.930410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:47.252 [2024-11-29 18:33:06.930415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:47.252 [2024-11-29 18:33:06.930421] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:47.252 [2024-11-29 18:33:06.930427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:47.252 [2024-11-29 18:33:06.930435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:47.252 [2024-11-29 18:33:06.930441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:47.252 [2024-11-29 18:33:06.930462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:47.252 [2024-11-29 18:33:06.930468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:47.252 [2024-11-29 18:33:06.930474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:47.252 [2024-11-29 18:33:06.930479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:47.252 [2024-11-29 18:33:06.930485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:47.252 [2024-11-29 18:33:06.930490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:47.252 [2024-11-29 18:33:06.930500] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:47.252 [2024-11-29 18:33:06.930507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:47.252 [2024-11-29 18:33:06.930515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:47.252 [2024-11-29 18:33:06.930521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:47.252 [2024-11-29 18:33:06.930527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:47.253 [2024-11-29 18:33:06.930532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:47.253 [2024-11-29 18:33:06.930539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:47.253 [2024-11-29 18:33:06.930544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:19:47.253 [2024-11-29 18:33:06.930554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:47.253 [2024-11-29 18:33:06.930560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:47.253 [2024-11-29 18:33:06.930567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:47.253 [2024-11-29 18:33:06.930573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:47.253 [2024-11-29 18:33:06.930579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:47.253 [2024-11-29 18:33:06.930584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:47.253 [2024-11-29 18:33:06.930591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:47.253 [2024-11-29 18:33:06.930597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:47.253 [2024-11-29 18:33:06.930603] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:47.253 [2024-11-29 18:33:06.930612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:47.253 [2024-11-29 18:33:06.930619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:47.253 [2024-11-29 18:33:06.930625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:47.253 [2024-11-29 18:33:06.930632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:47.253 [2024-11-29 18:33:06.930638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:47.253 [2024-11-29 18:33:06.930645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.253 [2024-11-29 18:33:06.930651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:47.253 [2024-11-29 18:33:06.930659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:19:47.253 [2024-11-29 18:33:06.930664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.253 [2024-11-29 18:33:06.930693] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
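The superblock metadata dump above lists every region as a type/version plus a block offset and size in hex, and consecutive regions are laid out back to back (for example, blk_offs 0x20 + blk_sz 0x5000 = 0x5020, which is exactly the offset of the next region). Below is a minimal sketch of that contiguity check, assuming the raw one-entry-per-line console format; the helper name and regex are illustrative only, not part of SPDK:

    # Illustrative check (not an SPDK tool): the regions printed by
    # ftl_superblock_v5_md_layout_dump should be contiguous, i.e. each
    # region starts where the previous one ends.
    import re

    REGION_RE = re.compile(
        r"Region type:0x[0-9a-fA-F]+ ver:\d+ blk_offs:(0x[0-9a-fA-F]+) blk_sz:(0x[0-9a-fA-F]+)")

    def check_contiguous(dump: str) -> None:
        regions = [(int(offs, 16), int(size, 16)) for offs, size in REGION_RE.findall(dump)]
        for (offs, size), (next_offs, _) in zip(regions, regions[1:]):
            # Each region should begin exactly where the previous one ended.
            assert offs + size == next_offs, f"gap or overlap at 0x{next_offs:x}"

    # Values copied from the nvc dump above:
    check_contiguous("""
    Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
    Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
    """)  # passes: 0x0 + 0x20 = 0x20 and 0x20 + 0x5000 = 0x5020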
00:19:47.253 [2024-11-29 18:33:06.930700] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:51.464 [2024-11-29 18:33:10.762029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.464 [2024-11-29 18:33:10.762160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:51.464 [2024-11-29 18:33:10.762182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3831.309 ms 00:19:51.464 [2024-11-29 18:33:10.762192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.464 [2024-11-29 18:33:10.776321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.464 [2024-11-29 18:33:10.776376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.464 [2024-11-29 18:33:10.776393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.001 ms 00:19:51.464 [2024-11-29 18:33:10.776405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.464 [2024-11-29 18:33:10.776545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.464 [2024-11-29 18:33:10.776557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:51.464 [2024-11-29 18:33:10.776575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:51.464 [2024-11-29 18:33:10.776583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.464 [2024-11-29 18:33:10.788794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.464 [2024-11-29 18:33:10.788845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.464 [2024-11-29 18:33:10.788860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.160 ms 00:19:51.464 [2024-11-29 18:33:10.788871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.464 [2024-11-29 18:33:10.788904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.464 [2024-11-29 18:33:10.788913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.464 [2024-11-29 18:33:10.788928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:51.464 [2024-11-29 18:33:10.788936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.464 [2024-11-29 18:33:10.789483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.464 [2024-11-29 18:33:10.789514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.464 [2024-11-29 18:33:10.789528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:19:51.464 [2024-11-29 18:33:10.789537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.464 [2024-11-29 18:33:10.789661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.464 [2024-11-29 18:33:10.789675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.464 [2024-11-29 18:33:10.789687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:51.464 [2024-11-29 18:33:10.789695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.464 [2024-11-29 18:33:10.797561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.464 [2024-11-29 18:33:10.797601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:51.464 [2024-11-29 
18:33:10.797613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.840 ms 00:19:51.464 [2024-11-29 18:33:10.797622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.464 [2024-11-29 18:33:10.823429] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:51.464 [2024-11-29 18:33:10.827161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.464 [2024-11-29 18:33:10.827216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:51.464 [2024-11-29 18:33:10.827229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.471 ms 00:19:51.464 [2024-11-29 18:33:10.827240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.464 [2024-11-29 18:33:10.914921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.464 [2024-11-29 18:33:10.915002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:51.464 [2024-11-29 18:33:10.915023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.639 ms 00:19:51.464 [2024-11-29 18:33:10.915039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.464 [2024-11-29 18:33:10.915253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.464 [2024-11-29 18:33:10.915268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:51.464 [2024-11-29 18:33:10.915277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:19:51.464 [2024-11-29 18:33:10.915288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.464 [2024-11-29 18:33:10.921989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.464 [2024-11-29 18:33:10.922052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:51.464 [2024-11-29 18:33:10.922079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.679 ms 00:19:51.464 [2024-11-29 18:33:10.922091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.464 [2024-11-29 18:33:10.927961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.464 [2024-11-29 18:33:10.928025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:51.464 [2024-11-29 18:33:10.928036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.810 ms 00:19:51.464 [2024-11-29 18:33:10.928045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.464 [2024-11-29 18:33:10.928389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.464 [2024-11-29 18:33:10.928404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:51.464 [2024-11-29 18:33:10.928414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:19:51.464 [2024-11-29 18:33:10.928427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.464 [2024-11-29 18:33:10.975490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.464 [2024-11-29 18:33:10.975555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:51.464 [2024-11-29 18:33:10.975570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.981 ms 00:19:51.464 [2024-11-29 18:33:10.975587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.464 [2024-11-29 18:33:10.983263] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.464 [2024-11-29 18:33:10.983331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:51.464 [2024-11-29 18:33:10.983343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.611 ms 00:19:51.464 [2024-11-29 18:33:10.983354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.464 [2024-11-29 18:33:10.990256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.464 [2024-11-29 18:33:10.990323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:51.464 [2024-11-29 18:33:10.990333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.848 ms 00:19:51.464 [2024-11-29 18:33:10.990342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.464 [2024-11-29 18:33:10.997397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.464 [2024-11-29 18:33:10.997501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:51.464 [2024-11-29 18:33:10.997513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.002 ms 00:19:51.464 [2024-11-29 18:33:10.997525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.464 [2024-11-29 18:33:10.997582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.465 [2024-11-29 18:33:10.997602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:51.465 [2024-11-29 18:33:10.997613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:51.465 [2024-11-29 18:33:10.997624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.465 [2024-11-29 18:33:10.997715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.465 [2024-11-29 18:33:10.997729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:51.465 [2024-11-29 18:33:10.997738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:51.465 [2024-11-29 18:33:10.997750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.465 [2024-11-29 18:33:10.998991] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4076.513 ms, result 0 00:19:51.465 { 00:19:51.465 "name": "ftl0", 00:19:51.465 "uuid": "d5cc8cc9-2901-46c0-b9e1-4e931c6522e6" 00:19:51.465 } 00:19:51.465 18:33:11 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:51.465 18:33:11 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:51.465 18:33:11 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:19:51.465 18:33:11 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:51.727 [2024-11-29 18:33:11.446275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.727 [2024-11-29 18:33:11.446346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:51.727 [2024-11-29 18:33:11.446364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:51.727 [2024-11-29 18:33:11.446373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.727 [2024-11-29 18:33:11.446402] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:51.727 
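The 'FTL startup' management process above finishes with result 0 after a total of 4076.513 ms, and each step along the way is traced by mngt/ftl_mngt.c as an Action / name / duration / status group. A small post-processing sketch follows, assuming the raw one-entry-per-line console format; the function and its pairing logic are a hypothetical convenience, not an SPDK utility:

    # Hypothetical summary pass over the trace_step NOTICE lines: pair each
    # "name: ..." entry with the "duration: ... ms" entry that follows it,
    # then rank the steps by cost.
    def summarize_steps(lines):
        steps, name = [], None
        for line in lines:
            if "trace_step" not in line:
                continue
            if "name: " in line:
                name = line.split("name: ", 1)[1].strip()
            elif "duration: " in line and name is not None:
                ms = float(line.split("duration: ", 1)[1].split(" ms", 1)[0])
                steps.append((name, ms))
                name = None
        return sorted(steps, key=lambda step: -step[1])

    # Applied to the startup sequence above, 'Scrub NV cache' (3831.309 ms)
    # accounts for roughly 94% of the 4076.513 ms total.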
[2024-11-29 18:33:11.447226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.727 [2024-11-29 18:33:11.447280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:51.727 [2024-11-29 18:33:11.447292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:19:51.727 [2024-11-29 18:33:11.447302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.727 [2024-11-29 18:33:11.447605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.727 [2024-11-29 18:33:11.447693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:51.727 [2024-11-29 18:33:11.447707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:19:51.727 [2024-11-29 18:33:11.447717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.727 [2024-11-29 18:33:11.450970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.727 [2024-11-29 18:33:11.451003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:51.727 [2024-11-29 18:33:11.451012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.235 ms 00:19:51.727 [2024-11-29 18:33:11.451022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.727 [2024-11-29 18:33:11.457511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.727 [2024-11-29 18:33:11.457556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:51.727 [2024-11-29 18:33:11.457567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.470 ms 00:19:51.727 [2024-11-29 18:33:11.457580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.727 [2024-11-29 18:33:11.460802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.727 [2024-11-29 18:33:11.460867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:51.727 [2024-11-29 18:33:11.460878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.132 ms 00:19:51.727 [2024-11-29 18:33:11.460888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.727 [2024-11-29 18:33:11.467539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.727 [2024-11-29 18:33:11.467605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:51.727 [2024-11-29 18:33:11.467617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.600 ms 00:19:51.727 [2024-11-29 18:33:11.467627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.728 [2024-11-29 18:33:11.467766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.728 [2024-11-29 18:33:11.467787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:51.728 [2024-11-29 18:33:11.467802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:19:51.728 [2024-11-29 18:33:11.467813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.728 [2024-11-29 18:33:11.471219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.728 [2024-11-29 18:33:11.471283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:51.728 [2024-11-29 18:33:11.471294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.386 ms 00:19:51.728 [2024-11-29 18:33:11.471303] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.728 [2024-11-29 18:33:11.474441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.728 [2024-11-29 18:33:11.474518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:51.728 [2024-11-29 18:33:11.474528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.086 ms 00:19:51.728 [2024-11-29 18:33:11.474537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.728 [2024-11-29 18:33:11.477132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.728 [2024-11-29 18:33:11.477193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:51.728 [2024-11-29 18:33:11.477203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.543 ms 00:19:51.728 [2024-11-29 18:33:11.477213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.728 [2024-11-29 18:33:11.479763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.728 [2024-11-29 18:33:11.479827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:51.728 [2024-11-29 18:33:11.479837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.474 ms 00:19:51.728 [2024-11-29 18:33:11.479846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.728 [2024-11-29 18:33:11.479891] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:51.728 [2024-11-29 18:33:11.479910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.479920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.479934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.479942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.479955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.479963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.479974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.479981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.479991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.479999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480045] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 
[2024-11-29 18:33:11.480264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:19:51.728 [2024-11-29 18:33:11.480505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:51.728 [2024-11-29 18:33:11.480577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 
0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:51.729 [2024-11-29 18:33:11.480832] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:51.729 [2024-11-29 18:33:11.480840] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d5cc8cc9-2901-46c0-b9e1-4e931c6522e6 00:19:51.729 [2024-11-29 18:33:11.480851] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:51.729 [2024-11-29 18:33:11.480858] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:51.729 [2024-11-29 18:33:11.480867] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:51.729 [2024-11-29 18:33:11.480875] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:51.729 [2024-11-29 18:33:11.480888] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:51.729 [2024-11-29 18:33:11.480896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:51.729 [2024-11-29 18:33:11.480905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:51.729 [2024-11-29 18:33:11.480912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:51.729 [2024-11-29 18:33:11.480920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:51.729 [2024-11-29 18:33:11.480927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.729 [2024-11-29 18:33:11.480937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:51.729 [2024-11-29 18:33:11.480950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:19:51.729 [2024-11-29 18:33:11.480960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.729 [2024-11-29 18:33:11.483475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.729 [2024-11-29 18:33:11.483525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:51.729 
[2024-11-29 18:33:11.483538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.490 ms 00:19:51.729 [2024-11-29 18:33:11.483548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.729 [2024-11-29 18:33:11.483669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.729 [2024-11-29 18:33:11.483680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:51.729 [2024-11-29 18:33:11.483689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:51.729 [2024-11-29 18:33:11.483699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.729 [2024-11-29 18:33:11.492302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.729 [2024-11-29 18:33:11.492365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:51.729 [2024-11-29 18:33:11.492376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.729 [2024-11-29 18:33:11.492387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.729 [2024-11-29 18:33:11.492475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.729 [2024-11-29 18:33:11.492492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.729 [2024-11-29 18:33:11.492500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.729 [2024-11-29 18:33:11.492510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.729 [2024-11-29 18:33:11.492572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.729 [2024-11-29 18:33:11.492588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.729 [2024-11-29 18:33:11.492596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.729 [2024-11-29 18:33:11.492609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.729 [2024-11-29 18:33:11.492628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.729 [2024-11-29 18:33:11.492639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.729 [2024-11-29 18:33:11.492648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.729 [2024-11-29 18:33:11.492657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.729 [2024-11-29 18:33:11.506624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.729 [2024-11-29 18:33:11.506688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.729 [2024-11-29 18:33:11.506701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.729 [2024-11-29 18:33:11.506712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.729 [2024-11-29 18:33:11.517250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.729 [2024-11-29 18:33:11.517308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.729 [2024-11-29 18:33:11.517319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.729 [2024-11-29 18:33:11.517330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.729 [2024-11-29 18:33:11.517405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.729 [2024-11-29 18:33:11.517420] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:51.729 [2024-11-29 18:33:11.517429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.729 [2024-11-29 18:33:11.517439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.729 [2024-11-29 18:33:11.517541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.729 [2024-11-29 18:33:11.517560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:51.729 [2024-11-29 18:33:11.517568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.729 [2024-11-29 18:33:11.517578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.729 [2024-11-29 18:33:11.517655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.729 [2024-11-29 18:33:11.517667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:51.729 [2024-11-29 18:33:11.517676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.729 [2024-11-29 18:33:11.517685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.729 [2024-11-29 18:33:11.517718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.729 [2024-11-29 18:33:11.517729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:51.729 [2024-11-29 18:33:11.517737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.729 [2024-11-29 18:33:11.517746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.729 [2024-11-29 18:33:11.517786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.729 [2024-11-29 18:33:11.517799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:51.729 [2024-11-29 18:33:11.517807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.729 [2024-11-29 18:33:11.517817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.729 [2024-11-29 18:33:11.517863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.729 [2024-11-29 18:33:11.517881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:51.729 [2024-11-29 18:33:11.517890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.729 [2024-11-29 18:33:11.517900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.729 [2024-11-29 18:33:11.518043] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.733 ms, result 0 00:19:51.729 true 00:19:51.729 18:33:11 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 88405 00:19:51.729 18:33:11 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88405 ']' 00:19:51.729 18:33:11 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88405 00:19:51.729 18:33:11 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:19:51.729 18:33:11 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.729 18:33:11 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88405 00:19:51.729 killing process with pid 88405 00:19:51.729 18:33:11 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:51.730 18:33:11 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:51.730 
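The statistics dump above reports 960 total media writes against 0 user writes, so the write amplification factor (WAF = media writes / user writes) is printed as "inf"; the whole 'FTL shutdown' process then completes in 71.733 ms. A one-line sketch of that WAF arithmetic, with a second, purely hypothetical data point for illustration:

    # WAF as printed by ftl_dev_dump_stats: media writes per user write.
    def waf(total_writes: int, user_writes: int) -> float:
        return float("inf") if user_writes == 0 else total_writes / user_writes

    assert waf(960, 0) == float("inf")  # values from the dump: no user I/O yet
    assert waf(960, 480) == 2.0         # hypothetical: 2 media writes per user write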
18:33:11 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88405' 00:19:51.730 18:33:11 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 88405 00:19:51.730 18:33:11 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 88405 00:19:57.076 18:33:16 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:00.369 262144+0 records in 00:20:00.369 262144+0 records out 00:20:00.369 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.82048 s, 281 MB/s 00:20:00.369 18:33:19 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:01.746 18:33:21 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:01.746 [2024-11-29 18:33:21.632874] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:20:01.746 [2024-11-29 18:33:21.632981] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88617 ] 00:20:02.008 [2024-11-29 18:33:21.787439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.008 [2024-11-29 18:33:21.807122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.008 [2024-11-29 18:33:21.900727] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:02.008 [2024-11-29 18:33:21.900801] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:02.270 [2024-11-29 18:33:22.061915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.270 [2024-11-29 18:33:22.061976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:02.270 [2024-11-29 18:33:22.061992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:02.270 [2024-11-29 18:33:22.062001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.270 [2024-11-29 18:33:22.062058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.270 [2024-11-29 18:33:22.062084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:02.270 [2024-11-29 18:33:22.062094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:02.270 [2024-11-29 18:33:22.062102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.270 [2024-11-29 18:33:22.062131] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:02.270 [2024-11-29 18:33:22.062548] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:02.270 [2024-11-29 18:33:22.062602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.270 [2024-11-29 18:33:22.062611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:02.270 [2024-11-29 18:33:22.062625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:20:02.270 [2024-11-29 18:33:22.062633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.270 [2024-11-29 18:33:22.064426] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: 
[FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:02.270 [2024-11-29 18:33:22.068534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.270 [2024-11-29 18:33:22.068598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:02.270 [2024-11-29 18:33:22.068614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.110 ms 00:20:02.270 [2024-11-29 18:33:22.068626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.270 [2024-11-29 18:33:22.068710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.270 [2024-11-29 18:33:22.068723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:02.270 [2024-11-29 18:33:22.068736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:02.270 [2024-11-29 18:33:22.068744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.270 [2024-11-29 18:33:22.077394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.270 [2024-11-29 18:33:22.077445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:02.270 [2024-11-29 18:33:22.077482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.604 ms 00:20:02.270 [2024-11-29 18:33:22.077490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.270 [2024-11-29 18:33:22.077603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.270 [2024-11-29 18:33:22.077614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:02.270 [2024-11-29 18:33:22.077623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:02.270 [2024-11-29 18:33:22.077634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.270 [2024-11-29 18:33:22.077693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.270 [2024-11-29 18:33:22.077702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:02.270 [2024-11-29 18:33:22.077711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:02.270 [2024-11-29 18:33:22.077722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.270 [2024-11-29 18:33:22.077753] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:02.271 [2024-11-29 18:33:22.079938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.271 [2024-11-29 18:33:22.079983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:02.271 [2024-11-29 18:33:22.079994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.197 ms 00:20:02.271 [2024-11-29 18:33:22.080002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.271 [2024-11-29 18:33:22.080038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.271 [2024-11-29 18:33:22.080047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:02.271 [2024-11-29 18:33:22.080056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:02.271 [2024-11-29 18:33:22.080067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.271 [2024-11-29 18:33:22.080093] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:02.271 [2024-11-29 18:33:22.080117] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:02.271 [2024-11-29 18:33:22.080155] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:02.271 [2024-11-29 18:33:22.080172] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:02.271 [2024-11-29 18:33:22.080278] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:02.271 [2024-11-29 18:33:22.080289] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:02.271 [2024-11-29 18:33:22.080303] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:02.271 [2024-11-29 18:33:22.080313] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:02.271 [2024-11-29 18:33:22.080322] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:02.271 [2024-11-29 18:33:22.080331] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:02.271 [2024-11-29 18:33:22.080339] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:02.271 [2024-11-29 18:33:22.080347] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:02.271 [2024-11-29 18:33:22.080354] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:02.271 [2024-11-29 18:33:22.080362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.271 [2024-11-29 18:33:22.080374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:02.271 [2024-11-29 18:33:22.080382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:20:02.271 [2024-11-29 18:33:22.080391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.271 [2024-11-29 18:33:22.080494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.271 [2024-11-29 18:33:22.080511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:02.271 [2024-11-29 18:33:22.080520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:02.271 [2024-11-29 18:33:22.080527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.271 [2024-11-29 18:33:22.080637] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:02.271 [2024-11-29 18:33:22.080654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:02.271 [2024-11-29 18:33:22.080663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:02.271 [2024-11-29 18:33:22.080683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.271 [2024-11-29 18:33:22.080693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:02.271 [2024-11-29 18:33:22.080703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:02.271 [2024-11-29 18:33:22.080713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:02.271 [2024-11-29 18:33:22.080722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:02.271 [2024-11-29 18:33:22.080730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:02.271 [2024-11-29 
18:33:22.080738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:02.271 [2024-11-29 18:33:22.080749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:02.271 [2024-11-29 18:33:22.080757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:02.271 [2024-11-29 18:33:22.080766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:02.271 [2024-11-29 18:33:22.080774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:02.271 [2024-11-29 18:33:22.080782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:02.271 [2024-11-29 18:33:22.080790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.271 [2024-11-29 18:33:22.080798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:02.271 [2024-11-29 18:33:22.080806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:02.271 [2024-11-29 18:33:22.080814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.271 [2024-11-29 18:33:22.080822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:02.271 [2024-11-29 18:33:22.080829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:02.271 [2024-11-29 18:33:22.080837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.271 [2024-11-29 18:33:22.080844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:02.271 [2024-11-29 18:33:22.080852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:02.271 [2024-11-29 18:33:22.080859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.271 [2024-11-29 18:33:22.080867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:02.271 [2024-11-29 18:33:22.080882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:02.271 [2024-11-29 18:33:22.080890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.271 [2024-11-29 18:33:22.080897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:02.271 [2024-11-29 18:33:22.080905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:02.271 [2024-11-29 18:33:22.080913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.271 [2024-11-29 18:33:22.080920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:02.271 [2024-11-29 18:33:22.080928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:02.271 [2024-11-29 18:33:22.080935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:02.271 [2024-11-29 18:33:22.080942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:02.271 [2024-11-29 18:33:22.080949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:02.271 [2024-11-29 18:33:22.080957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:02.271 [2024-11-29 18:33:22.080967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:02.271 [2024-11-29 18:33:22.080975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:02.271 [2024-11-29 18:33:22.080982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.271 [2024-11-29 18:33:22.080991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:20:02.271 [2024-11-29 18:33:22.080999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:02.271 [2024-11-29 18:33:22.081009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.271 [2024-11-29 18:33:22.081017] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:02.271 [2024-11-29 18:33:22.081029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:02.271 [2024-11-29 18:33:22.081038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:02.271 [2024-11-29 18:33:22.081047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.271 [2024-11-29 18:33:22.081056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:02.271 [2024-11-29 18:33:22.081064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:02.271 [2024-11-29 18:33:22.081072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:02.271 [2024-11-29 18:33:22.081080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:02.271 [2024-11-29 18:33:22.081086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:02.271 [2024-11-29 18:33:22.081093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:02.271 [2024-11-29 18:33:22.081102] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:02.271 [2024-11-29 18:33:22.081112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:02.271 [2024-11-29 18:33:22.081123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:02.271 [2024-11-29 18:33:22.081130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:02.271 [2024-11-29 18:33:22.081138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:02.271 [2024-11-29 18:33:22.081147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:02.271 [2024-11-29 18:33:22.081155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:02.271 [2024-11-29 18:33:22.081163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:02.271 [2024-11-29 18:33:22.081170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:02.271 [2024-11-29 18:33:22.081177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:02.271 [2024-11-29 18:33:22.081186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:02.271 [2024-11-29 18:33:22.081192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:02.271 [2024-11-29 18:33:22.081199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:02.271 [2024-11-29 18:33:22.081207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:02.271 [2024-11-29 18:33:22.081213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:02.271 [2024-11-29 18:33:22.081221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:02.272 [2024-11-29 18:33:22.081228] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:02.272 [2024-11-29 18:33:22.081237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:02.272 [2024-11-29 18:33:22.081249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:02.272 [2024-11-29 18:33:22.081256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:02.272 [2024-11-29 18:33:22.081263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:02.272 [2024-11-29 18:33:22.081274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:02.272 [2024-11-29 18:33:22.081282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.272 [2024-11-29 18:33:22.081290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:02.272 [2024-11-29 18:33:22.081298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:20:02.272 [2024-11-29 18:33:22.081308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.272 [2024-11-29 18:33:22.095016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.272 [2024-11-29 18:33:22.095069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:02.272 [2024-11-29 18:33:22.095081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.656 ms 00:20:02.272 [2024-11-29 18:33:22.095089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.272 [2024-11-29 18:33:22.095172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.272 [2024-11-29 18:33:22.095182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:02.272 [2024-11-29 18:33:22.095190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:02.272 [2024-11-29 18:33:22.095198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.272 [2024-11-29 18:33:22.120960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.272 [2024-11-29 18:33:22.121028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:02.272 [2024-11-29 18:33:22.121046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.702 ms 00:20:02.272 [2024-11-29 18:33:22.121058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.272 [2024-11-29 18:33:22.121118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.272 [2024-11-29 
18:33:22.121132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:02.272 [2024-11-29 18:33:22.121146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:02.272 [2024-11-29 18:33:22.121164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.272 [2024-11-29 18:33:22.121745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.272 [2024-11-29 18:33:22.121785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:02.272 [2024-11-29 18:33:22.121801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:20:02.272 [2024-11-29 18:33:22.121813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.272 [2024-11-29 18:33:22.122028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.272 [2024-11-29 18:33:22.122043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:02.272 [2024-11-29 18:33:22.122055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:20:02.272 [2024-11-29 18:33:22.122082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.272 [2024-11-29 18:33:22.130115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.272 [2024-11-29 18:33:22.130170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:02.272 [2024-11-29 18:33:22.130185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.003 ms 00:20:02.272 [2024-11-29 18:33:22.130197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.272 [2024-11-29 18:33:22.133842] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:02.272 [2024-11-29 18:33:22.133893] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:02.272 [2024-11-29 18:33:22.133906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.272 [2024-11-29 18:33:22.133915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:02.272 [2024-11-29 18:33:22.133924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.584 ms 00:20:02.272 [2024-11-29 18:33:22.133931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.272 [2024-11-29 18:33:22.149750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.272 [2024-11-29 18:33:22.149801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:02.272 [2024-11-29 18:33:22.149821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.761 ms 00:20:02.272 [2024-11-29 18:33:22.149829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.272 [2024-11-29 18:33:22.152678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.272 [2024-11-29 18:33:22.152719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:02.272 [2024-11-29 18:33:22.152730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.799 ms 00:20:02.272 [2024-11-29 18:33:22.152737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.272 [2024-11-29 18:33:22.155290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.272 [2024-11-29 18:33:22.155336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore trim metadata 00:20:02.272 [2024-11-29 18:33:22.155346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.511 ms 00:20:02.272 [2024-11-29 18:33:22.155353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.272 [2024-11-29 18:33:22.155724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.272 [2024-11-29 18:33:22.155744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:02.272 [2024-11-29 18:33:22.155754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:20:02.272 [2024-11-29 18:33:22.155762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.533 [2024-11-29 18:33:22.180078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.533 [2024-11-29 18:33:22.180129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:02.533 [2024-11-29 18:33:22.180142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.293 ms 00:20:02.533 [2024-11-29 18:33:22.180151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.533 [2024-11-29 18:33:22.188430] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:02.533 [2024-11-29 18:33:22.191340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.533 [2024-11-29 18:33:22.191386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:02.533 [2024-11-29 18:33:22.191397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.140 ms 00:20:02.533 [2024-11-29 18:33:22.191408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.533 [2024-11-29 18:33:22.191501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.533 [2024-11-29 18:33:22.191513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:02.533 [2024-11-29 18:33:22.191527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:02.533 [2024-11-29 18:33:22.191535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.533 [2024-11-29 18:33:22.191600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.533 [2024-11-29 18:33:22.191610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:02.533 [2024-11-29 18:33:22.191622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:02.533 [2024-11-29 18:33:22.191629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.533 [2024-11-29 18:33:22.191649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.533 [2024-11-29 18:33:22.191662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:02.533 [2024-11-29 18:33:22.191674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:02.533 [2024-11-29 18:33:22.191681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.533 [2024-11-29 18:33:22.191717] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:02.533 [2024-11-29 18:33:22.191728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.533 [2024-11-29 18:33:22.191736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:02.533 [2024-11-29 18:33:22.191748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.014 ms 00:20:02.533 [2024-11-29 18:33:22.191758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.533 [2024-11-29 18:33:22.196792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.533 [2024-11-29 18:33:22.196838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:02.533 [2024-11-29 18:33:22.196848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.017 ms 00:20:02.533 [2024-11-29 18:33:22.196856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.533 [2024-11-29 18:33:22.196938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.533 [2024-11-29 18:33:22.196948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:02.533 [2024-11-29 18:33:22.196960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:02.533 [2024-11-29 18:33:22.196969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.533 [2024-11-29 18:33:22.198142] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 135.793 ms, result 0 00:20:03.479  [2024-11-29T18:33:24.328Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-29T18:33:25.276Z] Copying: 35/1024 [MB] (24 MBps) [2024-11-29T18:33:26.225Z] Copying: 50/1024 [MB] (15 MBps) [2024-11-29T18:33:27.611Z] Copying: 65/1024 [MB] (14 MBps) [2024-11-29T18:33:28.555Z] Copying: 77/1024 [MB] (12 MBps) [2024-11-29T18:33:29.500Z] Copying: 93/1024 [MB] (15 MBps) [2024-11-29T18:33:30.442Z] Copying: 107/1024 [MB] (14 MBps) [2024-11-29T18:33:31.385Z] Copying: 139/1024 [MB] (32 MBps) [2024-11-29T18:33:32.329Z] Copying: 168/1024 [MB] (29 MBps) [2024-11-29T18:33:33.272Z] Copying: 186/1024 [MB] (17 MBps) [2024-11-29T18:33:34.217Z] Copying: 202/1024 [MB] (16 MBps) [2024-11-29T18:33:35.603Z] Copying: 216/1024 [MB] (13 MBps) [2024-11-29T18:33:36.547Z] Copying: 230/1024 [MB] (13 MBps) [2024-11-29T18:33:37.492Z] Copying: 249/1024 [MB] (19 MBps) [2024-11-29T18:33:38.436Z] Copying: 264/1024 [MB] (15 MBps) [2024-11-29T18:33:39.379Z] Copying: 281/1024 [MB] (17 MBps) [2024-11-29T18:33:40.323Z] Copying: 302/1024 [MB] (20 MBps) [2024-11-29T18:33:41.265Z] Copying: 326/1024 [MB] (23 MBps) [2024-11-29T18:33:42.652Z] Copying: 354/1024 [MB] (28 MBps) [2024-11-29T18:33:43.225Z] Copying: 370/1024 [MB] (15 MBps) [2024-11-29T18:33:44.613Z] Copying: 388/1024 [MB] (17 MBps) [2024-11-29T18:33:45.553Z] Copying: 403/1024 [MB] (15 MBps) [2024-11-29T18:33:46.493Z] Copying: 440/1024 [MB] (37 MBps) [2024-11-29T18:33:47.434Z] Copying: 457/1024 [MB] (17 MBps) [2024-11-29T18:33:48.377Z] Copying: 486/1024 [MB] (29 MBps) [2024-11-29T18:33:49.343Z] Copying: 511/1024 [MB] (24 MBps) [2024-11-29T18:33:50.338Z] Copying: 534/1024 [MB] (22 MBps) [2024-11-29T18:33:51.282Z] Copying: 549/1024 [MB] (15 MBps) [2024-11-29T18:33:52.228Z] Copying: 566/1024 [MB] (16 MBps) [2024-11-29T18:33:53.619Z] Copying: 587/1024 [MB] (21 MBps) [2024-11-29T18:33:54.563Z] Copying: 600/1024 [MB] (12 MBps) [2024-11-29T18:33:55.505Z] Copying: 636/1024 [MB] (35 MBps) [2024-11-29T18:33:56.464Z] Copying: 654/1024 [MB] (18 MBps) [2024-11-29T18:33:57.409Z] Copying: 675/1024 [MB] (21 MBps) [2024-11-29T18:33:58.355Z] Copying: 696/1024 [MB] (21 MBps) [2024-11-29T18:33:59.299Z] Copying: 729/1024 [MB] (32 MBps) [2024-11-29T18:34:00.246Z] Copying: 749/1024 [MB] (20 MBps) [2024-11-29T18:34:01.633Z] Copying: 766/1024 [MB] (16 MBps) [2024-11-29T18:34:02.576Z] Copying: 792/1024 [MB] (26 
MBps) [2024-11-29T18:34:03.521Z] Copying: 819/1024 [MB] (26 MBps) [2024-11-29T18:34:04.465Z] Copying: 835/1024 [MB] (16 MBps) [2024-11-29T18:34:05.410Z] Copying: 855/1024 [MB] (19 MBps) [2024-11-29T18:34:06.355Z] Copying: 872/1024 [MB] (16 MBps) [2024-11-29T18:34:07.300Z] Copying: 890/1024 [MB] (18 MBps) [2024-11-29T18:34:08.250Z] Copying: 905/1024 [MB] (14 MBps) [2024-11-29T18:34:09.639Z] Copying: 923/1024 [MB] (18 MBps) [2024-11-29T18:34:10.214Z] Copying: 951/1024 [MB] (28 MBps) [2024-11-29T18:34:11.602Z] Copying: 980/1024 [MB] (28 MBps) [2024-11-29T18:34:12.547Z] Copying: 996/1024 [MB] (16 MBps) [2024-11-29T18:34:12.547Z] Copying: 1014/1024 [MB] (17 MBps) [2024-11-29T18:34:12.547Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-29 18:34:12.508351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.642 [2024-11-29 18:34:12.508394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:52.642 [2024-11-29 18:34:12.508407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:52.642 [2024-11-29 18:34:12.508419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.642 [2024-11-29 18:34:12.508439] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:52.642 [2024-11-29 18:34:12.508905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.642 [2024-11-29 18:34:12.508923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:52.642 [2024-11-29 18:34:12.508932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:20:52.642 [2024-11-29 18:34:12.508939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.642 [2024-11-29 18:34:12.510797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.642 [2024-11-29 18:34:12.510829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:52.642 [2024-11-29 18:34:12.510839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.840 ms 00:20:52.642 [2024-11-29 18:34:12.510846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.642 [2024-11-29 18:34:12.526059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.642 [2024-11-29 18:34:12.526095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:52.642 [2024-11-29 18:34:12.526105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.185 ms 00:20:52.642 [2024-11-29 18:34:12.526113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.642 [2024-11-29 18:34:12.532256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.642 [2024-11-29 18:34:12.532281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:52.642 [2024-11-29 18:34:12.532291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.115 ms 00:20:52.642 [2024-11-29 18:34:12.532298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.642 [2024-11-29 18:34:12.534470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.642 [2024-11-29 18:34:12.534500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:52.642 [2024-11-29 18:34:12.534508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.100 ms 00:20:52.642 [2024-11-29 18:34:12.534515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:52.642 [2024-11-29 18:34:12.538038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.642 [2024-11-29 18:34:12.538090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:52.642 [2024-11-29 18:34:12.538100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.494 ms 00:20:52.642 [2024-11-29 18:34:12.538107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.642 [2024-11-29 18:34:12.538219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.642 [2024-11-29 18:34:12.538229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:52.642 [2024-11-29 18:34:12.538236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:20:52.642 [2024-11-29 18:34:12.538256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.642 [2024-11-29 18:34:12.540766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.642 [2024-11-29 18:34:12.540799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:52.642 [2024-11-29 18:34:12.540808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.493 ms 00:20:52.642 [2024-11-29 18:34:12.540814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.642 [2024-11-29 18:34:12.542756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.642 [2024-11-29 18:34:12.542788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:52.642 [2024-11-29 18:34:12.542796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.914 ms 00:20:52.642 [2024-11-29 18:34:12.542802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.642 [2024-11-29 18:34:12.544558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.642 [2024-11-29 18:34:12.544586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:52.642 [2024-11-29 18:34:12.544594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.727 ms 00:20:52.642 [2024-11-29 18:34:12.544600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.906 [2024-11-29 18:34:12.546723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.906 [2024-11-29 18:34:12.546755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:52.906 [2024-11-29 18:34:12.546763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.075 ms 00:20:52.906 [2024-11-29 18:34:12.546770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.906 [2024-11-29 18:34:12.546797] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:52.906 [2024-11-29 18:34:12.546817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 
state: free 00:20:52.906 [2024-11-29 18:34:12.546855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:52.906 [2024-11-29 18:34:12.546995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 
261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547383] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:52.907 [2024-11-29 18:34:12.547552] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:52.907 [2024-11-29 18:34:12.547559] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d5cc8cc9-2901-46c0-b9e1-4e931c6522e6 00:20:52.907 [2024-11-29 18:34:12.547567] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:52.907 [2024-11-29 18:34:12.547574] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:52.907 [2024-11-29 18:34:12.547584] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 
0 00:20:52.907 [2024-11-29 18:34:12.547591] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:52.907 [2024-11-29 18:34:12.547601] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:52.907 [2024-11-29 18:34:12.547608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:52.907 [2024-11-29 18:34:12.547615] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:52.907 [2024-11-29 18:34:12.547621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:52.907 [2024-11-29 18:34:12.547632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:52.907 [2024-11-29 18:34:12.547639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.907 [2024-11-29 18:34:12.547651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:52.907 [2024-11-29 18:34:12.547659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:20:52.907 [2024-11-29 18:34:12.547667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.907 [2024-11-29 18:34:12.549106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.907 [2024-11-29 18:34:12.549130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:52.907 [2024-11-29 18:34:12.549138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.425 ms 00:20:52.907 [2024-11-29 18:34:12.549146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.908 [2024-11-29 18:34:12.549224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.908 [2024-11-29 18:34:12.549232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:52.908 [2024-11-29 18:34:12.549244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:52.908 [2024-11-29 18:34:12.549251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.908 [2024-11-29 18:34:12.554217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.908 [2024-11-29 18:34:12.554252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:52.908 [2024-11-29 18:34:12.554261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.908 [2024-11-29 18:34:12.554268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.908 [2024-11-29 18:34:12.554328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.908 [2024-11-29 18:34:12.554336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:52.908 [2024-11-29 18:34:12.554343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.908 [2024-11-29 18:34:12.554351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.908 [2024-11-29 18:34:12.554403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.908 [2024-11-29 18:34:12.554413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:52.908 [2024-11-29 18:34:12.554420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.908 [2024-11-29 18:34:12.554426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.908 [2024-11-29 18:34:12.554440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.908 [2024-11-29 18:34:12.554450] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:52.908 [2024-11-29 18:34:12.554470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.908 [2024-11-29 18:34:12.554477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.908 [2024-11-29 18:34:12.563465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.908 [2024-11-29 18:34:12.563504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:52.908 [2024-11-29 18:34:12.563520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.908 [2024-11-29 18:34:12.563531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.908 [2024-11-29 18:34:12.570723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.908 [2024-11-29 18:34:12.570770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:52.908 [2024-11-29 18:34:12.570781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.908 [2024-11-29 18:34:12.570788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.908 [2024-11-29 18:34:12.570837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.908 [2024-11-29 18:34:12.570846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:52.908 [2024-11-29 18:34:12.570854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.908 [2024-11-29 18:34:12.570861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.908 [2024-11-29 18:34:12.570897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.908 [2024-11-29 18:34:12.570906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:52.908 [2024-11-29 18:34:12.570918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.908 [2024-11-29 18:34:12.570930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.908 [2024-11-29 18:34:12.570991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.908 [2024-11-29 18:34:12.571001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:52.908 [2024-11-29 18:34:12.571009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.908 [2024-11-29 18:34:12.571016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.908 [2024-11-29 18:34:12.571041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.908 [2024-11-29 18:34:12.571050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:52.908 [2024-11-29 18:34:12.571057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.908 [2024-11-29 18:34:12.571068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.908 [2024-11-29 18:34:12.571105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.908 [2024-11-29 18:34:12.571113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:52.908 [2024-11-29 18:34:12.571120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.908 [2024-11-29 18:34:12.571127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.908 [2024-11-29 18:34:12.571165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:20:52.908 [2024-11-29 18:34:12.571173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:52.908 [2024-11-29 18:34:12.571186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.908 [2024-11-29 18:34:12.571196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.908 [2024-11-29 18:34:12.571307] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 62.934 ms, result 0 00:20:53.169 00:20:53.169 00:20:53.169 18:34:12 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:20:53.169 [2024-11-29 18:34:12.979247] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:20:53.169 [2024-11-29 18:34:12.979404] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89146 ] 00:20:53.430 [2024-11-29 18:34:13.143940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.430 [2024-11-29 18:34:13.172565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.430 [2024-11-29 18:34:13.291113] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:53.430 [2024-11-29 18:34:13.291196] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:53.694 [2024-11-29 18:34:13.452318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.694 [2024-11-29 18:34:13.452380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:53.694 [2024-11-29 18:34:13.452395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:53.694 [2024-11-29 18:34:13.452404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.694 [2024-11-29 18:34:13.452489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.694 [2024-11-29 18:34:13.452501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:53.694 [2024-11-29 18:34:13.452511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:53.694 [2024-11-29 18:34:13.452519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.694 [2024-11-29 18:34:13.452549] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:53.694 [2024-11-29 18:34:13.453381] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:53.694 [2024-11-29 18:34:13.453450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.694 [2024-11-29 18:34:13.453481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:53.694 [2024-11-29 18:34:13.453495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:20:53.694 [2024-11-29 18:34:13.453504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.694 [2024-11-29 18:34:13.455293] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:53.694 [2024-11-29 18:34:13.458981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:53.694 [2024-11-29 18:34:13.459036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:53.694 [2024-11-29 18:34:13.459054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.690 ms 00:20:53.694 [2024-11-29 18:34:13.459066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.694 [2024-11-29 18:34:13.459138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.694 [2024-11-29 18:34:13.459151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:53.694 [2024-11-29 18:34:13.459161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:53.694 [2024-11-29 18:34:13.459169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.694 [2024-11-29 18:34:13.467225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.694 [2024-11-29 18:34:13.467271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:53.694 [2024-11-29 18:34:13.467287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.012 ms 00:20:53.694 [2024-11-29 18:34:13.467295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.694 [2024-11-29 18:34:13.467397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.694 [2024-11-29 18:34:13.467407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:53.694 [2024-11-29 18:34:13.467419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:20:53.694 [2024-11-29 18:34:13.467430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.694 [2024-11-29 18:34:13.467504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.694 [2024-11-29 18:34:13.467516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:53.694 [2024-11-29 18:34:13.467524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:53.694 [2024-11-29 18:34:13.467537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.694 [2024-11-29 18:34:13.467567] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:53.694 [2024-11-29 18:34:13.469793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.694 [2024-11-29 18:34:13.469828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:53.694 [2024-11-29 18:34:13.469847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.234 ms 00:20:53.694 [2024-11-29 18:34:13.469860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.694 [2024-11-29 18:34:13.469895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.694 [2024-11-29 18:34:13.469904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:53.694 [2024-11-29 18:34:13.469914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:53.694 [2024-11-29 18:34:13.469926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.694 [2024-11-29 18:34:13.469952] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:53.694 [2024-11-29 18:34:13.469973] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:53.694 [2024-11-29 18:34:13.470016] 
upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:53.694 [2024-11-29 18:34:13.470034] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:53.694 [2024-11-29 18:34:13.470152] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:53.694 [2024-11-29 18:34:13.470169] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:53.694 [2024-11-29 18:34:13.470184] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:53.694 [2024-11-29 18:34:13.470196] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:53.694 [2024-11-29 18:34:13.470207] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:53.694 [2024-11-29 18:34:13.470217] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:53.694 [2024-11-29 18:34:13.470226] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:53.694 [2024-11-29 18:34:13.470237] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:53.694 [2024-11-29 18:34:13.470244] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:53.694 [2024-11-29 18:34:13.470252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.694 [2024-11-29 18:34:13.470259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:53.694 [2024-11-29 18:34:13.470268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:20:53.694 [2024-11-29 18:34:13.470278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.694 [2024-11-29 18:34:13.470363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.694 [2024-11-29 18:34:13.470383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:53.694 [2024-11-29 18:34:13.470391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:53.694 [2024-11-29 18:34:13.470399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.694 [2024-11-29 18:34:13.470517] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:53.694 [2024-11-29 18:34:13.470534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:53.694 [2024-11-29 18:34:13.470543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:53.694 [2024-11-29 18:34:13.470560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.694 [2024-11-29 18:34:13.470568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:53.694 [2024-11-29 18:34:13.470575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:53.694 [2024-11-29 18:34:13.470583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:53.694 [2024-11-29 18:34:13.470590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:53.694 [2024-11-29 18:34:13.470597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:53.694 [2024-11-29 18:34:13.470604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:53.695 [2024-11-29 18:34:13.470615] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:53.695 [2024-11-29 18:34:13.470622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:53.695 [2024-11-29 18:34:13.470629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:53.695 [2024-11-29 18:34:13.470636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:53.695 [2024-11-29 18:34:13.470643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:53.695 [2024-11-29 18:34:13.470650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.695 [2024-11-29 18:34:13.470657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:53.695 [2024-11-29 18:34:13.470665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:53.695 [2024-11-29 18:34:13.470672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.695 [2024-11-29 18:34:13.470679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:53.695 [2024-11-29 18:34:13.470686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:53.695 [2024-11-29 18:34:13.470693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.695 [2024-11-29 18:34:13.470700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:53.695 [2024-11-29 18:34:13.470707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:53.695 [2024-11-29 18:34:13.470714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.695 [2024-11-29 18:34:13.470720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:53.695 [2024-11-29 18:34:13.470732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:53.695 [2024-11-29 18:34:13.470740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.695 [2024-11-29 18:34:13.470747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:53.695 [2024-11-29 18:34:13.470753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:53.695 [2024-11-29 18:34:13.470760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.695 [2024-11-29 18:34:13.470766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:53.695 [2024-11-29 18:34:13.470773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:53.695 [2024-11-29 18:34:13.470780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:53.695 [2024-11-29 18:34:13.470789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:53.695 [2024-11-29 18:34:13.470796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:53.695 [2024-11-29 18:34:13.470803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:53.695 [2024-11-29 18:34:13.470809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:53.695 [2024-11-29 18:34:13.470816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:53.695 [2024-11-29 18:34:13.470823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.695 [2024-11-29 18:34:13.470830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:53.695 [2024-11-29 18:34:13.470837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 
00:20:53.695 [2024-11-29 18:34:13.470847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.695 [2024-11-29 18:34:13.470853] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:53.695 [2024-11-29 18:34:13.470868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:53.695 [2024-11-29 18:34:13.470875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:53.695 [2024-11-29 18:34:13.470883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.695 [2024-11-29 18:34:13.470890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:53.695 [2024-11-29 18:34:13.470897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:53.695 [2024-11-29 18:34:13.470904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:53.695 [2024-11-29 18:34:13.470911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:53.695 [2024-11-29 18:34:13.470917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:53.695 [2024-11-29 18:34:13.470924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:53.695 [2024-11-29 18:34:13.470932] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:53.695 [2024-11-29 18:34:13.470943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:53.695 [2024-11-29 18:34:13.470955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:53.695 [2024-11-29 18:34:13.470963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:53.695 [2024-11-29 18:34:13.470970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:53.695 [2024-11-29 18:34:13.470980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:53.695 [2024-11-29 18:34:13.470988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:53.695 [2024-11-29 18:34:13.470996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:53.695 [2024-11-29 18:34:13.471003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:53.695 [2024-11-29 18:34:13.471011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:53.695 [2024-11-29 18:34:13.471018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:53.695 [2024-11-29 18:34:13.471025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:53.695 [2024-11-29 18:34:13.471033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:53.695 [2024-11-29 18:34:13.471041] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:53.695 [2024-11-29 18:34:13.471048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:53.695 [2024-11-29 18:34:13.471055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:53.695 [2024-11-29 18:34:13.471062] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:53.695 [2024-11-29 18:34:13.471075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:53.695 [2024-11-29 18:34:13.471083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:53.695 [2024-11-29 18:34:13.471091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:53.695 [2024-11-29 18:34:13.471100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:53.695 [2024-11-29 18:34:13.471110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:53.695 [2024-11-29 18:34:13.471118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.695 [2024-11-29 18:34:13.471130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:53.695 [2024-11-29 18:34:13.471137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:20:53.695 [2024-11-29 18:34:13.471151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.695 [2024-11-29 18:34:13.485112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.695 [2024-11-29 18:34:13.485166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:53.695 [2024-11-29 18:34:13.485177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.910 ms 00:20:53.695 [2024-11-29 18:34:13.485185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.695 [2024-11-29 18:34:13.485270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.695 [2024-11-29 18:34:13.485279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:53.695 [2024-11-29 18:34:13.485287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:53.695 [2024-11-29 18:34:13.485301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.695 [2024-11-29 18:34:13.508362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.695 [2024-11-29 18:34:13.508425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:53.695 [2024-11-29 18:34:13.508474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.002 ms 00:20:53.695 [2024-11-29 18:34:13.508486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.695 [2024-11-29 18:34:13.508540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.695 [2024-11-29 18:34:13.508552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:53.695 
[2024-11-29 18:34:13.508563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:53.695 [2024-11-29 18:34:13.508577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.695 [2024-11-29 18:34:13.509197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.695 [2024-11-29 18:34:13.509245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:53.695 [2024-11-29 18:34:13.509259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:20:53.695 [2024-11-29 18:34:13.509280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.695 [2024-11-29 18:34:13.509474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.695 [2024-11-29 18:34:13.509488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:53.695 [2024-11-29 18:34:13.509499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:20:53.695 [2024-11-29 18:34:13.509508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.695 [2024-11-29 18:34:13.517725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.695 [2024-11-29 18:34:13.517769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:53.695 [2024-11-29 18:34:13.517779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.191 ms 00:20:53.695 [2024-11-29 18:34:13.517787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.695 [2024-11-29 18:34:13.521609] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:53.695 [2024-11-29 18:34:13.521659] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:53.695 [2024-11-29 18:34:13.521676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.695 [2024-11-29 18:34:13.521684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:53.696 [2024-11-29 18:34:13.521693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.793 ms 00:20:53.696 [2024-11-29 18:34:13.521700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.696 [2024-11-29 18:34:13.537821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.696 [2024-11-29 18:34:13.537867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:53.696 [2024-11-29 18:34:13.537886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.060 ms 00:20:53.696 [2024-11-29 18:34:13.537894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.696 [2024-11-29 18:34:13.541033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.696 [2024-11-29 18:34:13.541078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:53.696 [2024-11-29 18:34:13.541089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.085 ms 00:20:53.696 [2024-11-29 18:34:13.541097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.696 [2024-11-29 18:34:13.543922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.696 [2024-11-29 18:34:13.544100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:53.696 [2024-11-29 18:34:13.544118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.778 ms 00:20:53.696 [2024-11-29 18:34:13.544126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.696 [2024-11-29 18:34:13.544537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.696 [2024-11-29 18:34:13.544561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:53.696 [2024-11-29 18:34:13.544571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:20:53.696 [2024-11-29 18:34:13.544584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.696 [2024-11-29 18:34:13.570017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.696 [2024-11-29 18:34:13.570096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:53.696 [2024-11-29 18:34:13.570113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.410 ms 00:20:53.696 [2024-11-29 18:34:13.570121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.696 [2024-11-29 18:34:13.578260] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:53.696 [2024-11-29 18:34:13.581390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.696 [2024-11-29 18:34:13.581432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:53.696 [2024-11-29 18:34:13.581444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.217 ms 00:20:53.696 [2024-11-29 18:34:13.581493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.696 [2024-11-29 18:34:13.581579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.696 [2024-11-29 18:34:13.581592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:53.696 [2024-11-29 18:34:13.581602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:53.696 [2024-11-29 18:34:13.581610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.696 [2024-11-29 18:34:13.581680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.696 [2024-11-29 18:34:13.581695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:53.696 [2024-11-29 18:34:13.581704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:53.696 [2024-11-29 18:34:13.581712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.696 [2024-11-29 18:34:13.581738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.696 [2024-11-29 18:34:13.581747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:53.696 [2024-11-29 18:34:13.581755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:53.696 [2024-11-29 18:34:13.581763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.696 [2024-11-29 18:34:13.581801] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:53.696 [2024-11-29 18:34:13.581816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.696 [2024-11-29 18:34:13.581828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:53.696 [2024-11-29 18:34:13.581839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:53.696 [2024-11-29 18:34:13.581846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0
00:20:53.696 [2024-11-29 18:34:13.586867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:53.696 [2024-11-29 18:34:13.586913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:20:53.696 [2024-11-29 18:34:13.586924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.002 ms
00:20:53.696 [2024-11-29 18:34:13.586932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:53.696 [2024-11-29 18:34:13.587019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:53.696 [2024-11-29 18:34:13.587034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:53.696 [2024-11-29 18:34:13.587046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms
00:20:53.696 [2024-11-29 18:34:13.587057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:53.696 [2024-11-29 18:34:13.588243] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 135.476 ms, result 0
[2024-11-29T18:35:10.958Z] Copying: 1024/1024 [MB] (average 18 MBps)
[2024-11-29 18:35:10.683624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:51.053 [2024-11-29 18:35:10.683694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:21:51.053 [2024-11-29 18:35:10.683718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:21:51.053 [2024-11-29 18:35:10.683727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:51.053 [2024-11-29 18:35:10.683753] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:51.053 [2024-11-29 18:35:10.684600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:51.053 [2024-11-29 18:35:10.684636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:21:51.053 [2024-11-29 18:35:10.684649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms
00:21:51.053 [2024-11-29 18:35:10.684671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:51.053 [2024-11-29 18:35:10.684916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:51.053 [2024-11-29 18:35:10.684928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:21:51.053 [2024-11-29 18:35:10.684937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms
00:21:51.053 [2024-11-29 18:35:10.684949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:51.053 [2024-11-29 18:35:10.688898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:51.053 [2024-11-29 18:35:10.688921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:21:51.053 [2024-11-29 18:35:10.688932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.934 ms
00:21:51.053 [2024-11-29 18:35:10.688941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:51.053 [2024-11-29 18:35:10.695694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:51.053 [2024-11-29 18:35:10.695732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:21:51.053 [2024-11-29 18:35:10.695743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.734 ms
00:21:51.053 [2024-11-29 18:35:10.695759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:51.053 [2024-11-29 18:35:10.698755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:51.053 [2024-11-29 18:35:10.698802] mngt/ftl_mngt.c: 428:trace_step:
*NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:51.053 [2024-11-29 18:35:10.698813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.925 ms 00:21:51.053 [2024-11-29 18:35:10.698820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.053 [2024-11-29 18:35:10.704436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.053 [2024-11-29 18:35:10.704492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:51.053 [2024-11-29 18:35:10.704504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.571 ms 00:21:51.053 [2024-11-29 18:35:10.704512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.053 [2024-11-29 18:35:10.704639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.053 [2024-11-29 18:35:10.704660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:51.053 [2024-11-29 18:35:10.704677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:21:51.053 [2024-11-29 18:35:10.704688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.053 [2024-11-29 18:35:10.708164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.053 [2024-11-29 18:35:10.708207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:51.053 [2024-11-29 18:35:10.708217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.457 ms 00:21:51.053 [2024-11-29 18:35:10.708225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.053 [2024-11-29 18:35:10.711262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.053 [2024-11-29 18:35:10.711306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:51.053 [2024-11-29 18:35:10.711316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.995 ms 00:21:51.053 [2024-11-29 18:35:10.711323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.053 [2024-11-29 18:35:10.714372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.053 [2024-11-29 18:35:10.714433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:51.053 [2024-11-29 18:35:10.714447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.006 ms 00:21:51.053 [2024-11-29 18:35:10.714471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.053 [2024-11-29 18:35:10.716748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.053 [2024-11-29 18:35:10.716792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:51.053 [2024-11-29 18:35:10.716802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.196 ms 00:21:51.053 [2024-11-29 18:35:10.716811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.053 [2024-11-29 18:35:10.716851] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:51.053 [2024-11-29 18:35:10.716868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:51.053 [2024-11-29 18:35:10.716880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:51.053 [2024-11-29 18:35:10.716890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:51.053 
[2024-11-29 18:35:10.716898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:51.053 [2024-11-29 18:35:10.716907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:51.053 [2024-11-29 18:35:10.716914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:51.053 [2024-11-29 18:35:10.716923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.716931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.716940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.716947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.716955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.716963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.716970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.716980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.716988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.716995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: 
free 00:21:51.054 [2024-11-29 18:35:10.717094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 
261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.717995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:51.054 [2024-11-29 18:35:10.718182] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:51.054 [2024-11-29 18:35:10.718191] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d5cc8cc9-2901-46c0-b9e1-4e931c6522e6 00:21:51.054 [2024-11-29 18:35:10.718199] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
total valid LBAs: 0 00:21:51.054 [2024-11-29 18:35:10.718214] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:51.054 [2024-11-29 18:35:10.718222] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:51.054 [2024-11-29 18:35:10.718230] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:51.054 [2024-11-29 18:35:10.718238] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:51.054 [2024-11-29 18:35:10.718246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:51.054 [2024-11-29 18:35:10.718267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:51.054 [2024-11-29 18:35:10.718281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:51.054 [2024-11-29 18:35:10.718288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:51.054 [2024-11-29 18:35:10.718296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.054 [2024-11-29 18:35:10.718305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:51.054 [2024-11-29 18:35:10.718314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.446 ms 00:21:51.054 [2024-11-29 18:35:10.718322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.054 [2024-11-29 18:35:10.720700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.054 [2024-11-29 18:35:10.720743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:51.054 [2024-11-29 18:35:10.720755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.357 ms 00:21:51.054 [2024-11-29 18:35:10.720765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.054 [2024-11-29 18:35:10.720890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.054 [2024-11-29 18:35:10.720901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:51.054 [2024-11-29 18:35:10.720915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:21:51.054 [2024-11-29 18:35:10.720928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.054 [2024-11-29 18:35:10.728882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.054 [2024-11-29 18:35:10.728928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:51.054 [2024-11-29 18:35:10.728939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.054 [2024-11-29 18:35:10.728952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.054 [2024-11-29 18:35:10.729014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.054 [2024-11-29 18:35:10.729023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:51.054 [2024-11-29 18:35:10.729031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.054 [2024-11-29 18:35:10.729039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.054 [2024-11-29 18:35:10.729101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.054 [2024-11-29 18:35:10.729112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:51.054 [2024-11-29 18:35:10.729121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.054 [2024-11-29 
18:35:10.729129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.054 [2024-11-29 18:35:10.729147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.054 [2024-11-29 18:35:10.729157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:51.054 [2024-11-29 18:35:10.729165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.054 [2024-11-29 18:35:10.729173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.054 [2024-11-29 18:35:10.743571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.054 [2024-11-29 18:35:10.743623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:51.054 [2024-11-29 18:35:10.743636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.054 [2024-11-29 18:35:10.743644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.054 [2024-11-29 18:35:10.754668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.054 [2024-11-29 18:35:10.754723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:51.054 [2024-11-29 18:35:10.754734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.054 [2024-11-29 18:35:10.754753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.054 [2024-11-29 18:35:10.754805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.054 [2024-11-29 18:35:10.754816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:51.054 [2024-11-29 18:35:10.754825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.054 [2024-11-29 18:35:10.754833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.054 [2024-11-29 18:35:10.754870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.054 [2024-11-29 18:35:10.754884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:51.054 [2024-11-29 18:35:10.754893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.054 [2024-11-29 18:35:10.754900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.054 [2024-11-29 18:35:10.754971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.054 [2024-11-29 18:35:10.754982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:51.054 [2024-11-29 18:35:10.754991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.054 [2024-11-29 18:35:10.754999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.054 [2024-11-29 18:35:10.755027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.054 [2024-11-29 18:35:10.755037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:51.054 [2024-11-29 18:35:10.755052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.054 [2024-11-29 18:35:10.755064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.054 [2024-11-29 18:35:10.755104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.054 [2024-11-29 18:35:10.755114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:51.054 [2024-11-29 18:35:10.755122] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.054 [2024-11-29 18:35:10.755134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.054 [2024-11-29 18:35:10.755182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.054 [2024-11-29 18:35:10.755205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:51.054 [2024-11-29 18:35:10.755215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.054 [2024-11-29 18:35:10.755223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.054 [2024-11-29 18:35:10.755364] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.703 ms, result 0 00:21:51.054 00:21:51.054 00:21:51.315 18:35:10 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:53.229 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:53.229 18:35:13 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:53.490 [2024-11-29 18:35:13.133188] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:21:53.491 [2024-11-29 18:35:13.133288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89770 ] 00:21:53.491 [2024-11-29 18:35:13.285240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.491 [2024-11-29 18:35:13.305464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.753 [2024-11-29 18:35:13.396228] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:53.753 [2024-11-29 18:35:13.396302] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:53.753 [2024-11-29 18:35:13.553694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.753 [2024-11-29 18:35:13.553745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:53.753 [2024-11-29 18:35:13.553758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:53.753 [2024-11-29 18:35:13.553766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.753 [2024-11-29 18:35:13.553818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.753 [2024-11-29 18:35:13.553829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:53.753 [2024-11-29 18:35:13.553837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:53.753 [2024-11-29 18:35:13.553845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.753 [2024-11-29 18:35:13.553867] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:53.753 [2024-11-29 18:35:13.554399] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:53.753 [2024-11-29 18:35:13.554443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.753 [2024-11-29 18:35:13.554470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:21:53.753 [2024-11-29 18:35:13.554484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:21:53.753 [2024-11-29 18:35:13.554492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.753 [2024-11-29 18:35:13.555669] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:53.753 [2024-11-29 18:35:13.558435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.753 [2024-11-29 18:35:13.558487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:53.753 [2024-11-29 18:35:13.558502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.768 ms 00:21:53.753 [2024-11-29 18:35:13.558513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.753 [2024-11-29 18:35:13.558564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.753 [2024-11-29 18:35:13.558576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:53.753 [2024-11-29 18:35:13.558584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:53.753 [2024-11-29 18:35:13.558597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.753 [2024-11-29 18:35:13.563923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.753 [2024-11-29 18:35:13.563956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:53.753 [2024-11-29 18:35:13.563971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.266 ms 00:21:53.753 [2024-11-29 18:35:13.563978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.753 [2024-11-29 18:35:13.564064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.753 [2024-11-29 18:35:13.564077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:53.753 [2024-11-29 18:35:13.564085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:53.753 [2024-11-29 18:35:13.564098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.753 [2024-11-29 18:35:13.564134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.753 [2024-11-29 18:35:13.564143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:53.753 [2024-11-29 18:35:13.564156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:53.753 [2024-11-29 18:35:13.564165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.753 [2024-11-29 18:35:13.564188] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:53.753 [2024-11-29 18:35:13.565656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.753 [2024-11-29 18:35:13.565681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:53.753 [2024-11-29 18:35:13.565691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.473 ms 00:21:53.753 [2024-11-29 18:35:13.565698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.753 [2024-11-29 18:35:13.565726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.753 [2024-11-29 18:35:13.565740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:53.753 [2024-11-29 18:35:13.565748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 
ms 00:21:53.753 [2024-11-29 18:35:13.565757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.753 [2024-11-29 18:35:13.565776] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:53.753 [2024-11-29 18:35:13.565796] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:53.753 [2024-11-29 18:35:13.565833] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:53.753 [2024-11-29 18:35:13.565848] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:53.753 [2024-11-29 18:35:13.565949] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:53.753 [2024-11-29 18:35:13.565960] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:53.753 [2024-11-29 18:35:13.565976] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:53.753 [2024-11-29 18:35:13.565985] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:53.753 [2024-11-29 18:35:13.565994] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:53.753 [2024-11-29 18:35:13.566001] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:53.753 [2024-11-29 18:35:13.566013] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:53.753 [2024-11-29 18:35:13.566020] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:53.753 [2024-11-29 18:35:13.566026] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:53.753 [2024-11-29 18:35:13.566037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.753 [2024-11-29 18:35:13.566045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:53.753 [2024-11-29 18:35:13.566074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:21:53.753 [2024-11-29 18:35:13.566081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.754 [2024-11-29 18:35:13.566171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.754 [2024-11-29 18:35:13.566179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:53.754 [2024-11-29 18:35:13.566187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:21:53.754 [2024-11-29 18:35:13.566197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.754 [2024-11-29 18:35:13.566299] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:53.754 [2024-11-29 18:35:13.566309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:53.754 [2024-11-29 18:35:13.566318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:53.754 [2024-11-29 18:35:13.566331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:53.754 [2024-11-29 18:35:13.566340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:53.754 [2024-11-29 18:35:13.566348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:53.754 [2024-11-29 18:35:13.566355] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:53.754 [2024-11-29 18:35:13.566363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:53.754 [2024-11-29 18:35:13.566371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:53.754 [2024-11-29 18:35:13.566378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:53.754 [2024-11-29 18:35:13.566386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:53.754 [2024-11-29 18:35:13.566398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:53.754 [2024-11-29 18:35:13.566405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:53.754 [2024-11-29 18:35:13.566412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:53.754 [2024-11-29 18:35:13.566420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:53.754 [2024-11-29 18:35:13.566427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:53.754 [2024-11-29 18:35:13.566435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:53.754 [2024-11-29 18:35:13.566442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:53.754 [2024-11-29 18:35:13.566450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:53.754 [2024-11-29 18:35:13.566493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:53.754 [2024-11-29 18:35:13.566502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:53.754 [2024-11-29 18:35:13.566510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:53.754 [2024-11-29 18:35:13.566518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:53.754 [2024-11-29 18:35:13.566525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:53.754 [2024-11-29 18:35:13.566533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:53.754 [2024-11-29 18:35:13.566540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:53.754 [2024-11-29 18:35:13.566548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:53.754 [2024-11-29 18:35:13.566559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:53.754 [2024-11-29 18:35:13.566567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:53.754 [2024-11-29 18:35:13.566574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:53.754 [2024-11-29 18:35:13.566582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:53.754 [2024-11-29 18:35:13.566589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:53.754 [2024-11-29 18:35:13.566597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:53.754 [2024-11-29 18:35:13.566604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:53.754 [2024-11-29 18:35:13.566611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:53.754 [2024-11-29 18:35:13.566620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:53.754 [2024-11-29 18:35:13.566627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:53.754 [2024-11-29 18:35:13.566634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:53.754 [2024-11-29 
18:35:13.566642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:53.754 [2024-11-29 18:35:13.566649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:53.754 [2024-11-29 18:35:13.566657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:53.754 [2024-11-29 18:35:13.566664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:53.754 [2024-11-29 18:35:13.566671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:53.754 [2024-11-29 18:35:13.566681] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:53.754 [2024-11-29 18:35:13.566691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:53.754 [2024-11-29 18:35:13.566699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:53.754 [2024-11-29 18:35:13.566708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:53.754 [2024-11-29 18:35:13.566716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:53.754 [2024-11-29 18:35:13.566724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:53.754 [2024-11-29 18:35:13.566731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:53.754 [2024-11-29 18:35:13.566739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:53.754 [2024-11-29 18:35:13.566747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:53.754 [2024-11-29 18:35:13.566756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:53.754 [2024-11-29 18:35:13.566766] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:53.754 [2024-11-29 18:35:13.566776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:53.754 [2024-11-29 18:35:13.566785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:53.754 [2024-11-29 18:35:13.566794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:53.754 [2024-11-29 18:35:13.566801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:53.754 [2024-11-29 18:35:13.566810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:53.754 [2024-11-29 18:35:13.566819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:53.754 [2024-11-29 18:35:13.566826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:53.754 [2024-11-29 18:35:13.566832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:53.754 [2024-11-29 18:35:13.566840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:53.754 [2024-11-29 18:35:13.566846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 
blk_sz:0x40 00:21:53.754 [2024-11-29 18:35:13.566853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:53.754 [2024-11-29 18:35:13.566860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:53.754 [2024-11-29 18:35:13.566867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:53.754 [2024-11-29 18:35:13.566873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:53.754 [2024-11-29 18:35:13.566880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:53.754 [2024-11-29 18:35:13.566887] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:53.754 [2024-11-29 18:35:13.566895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:53.754 [2024-11-29 18:35:13.566902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:53.754 [2024-11-29 18:35:13.566910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:53.754 [2024-11-29 18:35:13.566917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:53.754 [2024-11-29 18:35:13.566924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:53.754 [2024-11-29 18:35:13.566933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.754 [2024-11-29 18:35:13.566940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:53.754 [2024-11-29 18:35:13.566952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:21:53.754 [2024-11-29 18:35:13.566961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.754 [2024-11-29 18:35:13.576612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.754 [2024-11-29 18:35:13.576653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:53.754 [2024-11-29 18:35:13.576662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.601 ms 00:21:53.754 [2024-11-29 18:35:13.576670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.754 [2024-11-29 18:35:13.576749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.754 [2024-11-29 18:35:13.576763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:53.754 [2024-11-29 18:35:13.576774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:53.754 [2024-11-29 18:35:13.576781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.754 [2024-11-29 18:35:13.598064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.754 [2024-11-29 18:35:13.598124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:53.754 [2024-11-29 18:35:13.598142] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.240 ms 00:21:53.754 [2024-11-29 18:35:13.598154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.754 [2024-11-29 18:35:13.598210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.754 [2024-11-29 18:35:13.598226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:53.754 [2024-11-29 18:35:13.598250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:53.754 [2024-11-29 18:35:13.598261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.754 [2024-11-29 18:35:13.598741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.755 [2024-11-29 18:35:13.598792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:53.755 [2024-11-29 18:35:13.598808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:21:53.755 [2024-11-29 18:35:13.598822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.755 [2024-11-29 18:35:13.599016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.755 [2024-11-29 18:35:13.599031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:53.755 [2024-11-29 18:35:13.599050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:21:53.755 [2024-11-29 18:35:13.599063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.755 [2024-11-29 18:35:13.605227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.755 [2024-11-29 18:35:13.605262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:53.755 [2024-11-29 18:35:13.605270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.136 ms 00:21:53.755 [2024-11-29 18:35:13.605277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.755 [2024-11-29 18:35:13.608128] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:53.755 [2024-11-29 18:35:13.608169] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:53.755 [2024-11-29 18:35:13.608183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.755 [2024-11-29 18:35:13.608190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:53.755 [2024-11-29 18:35:13.608199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.827 ms 00:21:53.755 [2024-11-29 18:35:13.608207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.755 [2024-11-29 18:35:13.623160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.755 [2024-11-29 18:35:13.623204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:53.755 [2024-11-29 18:35:13.623215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.915 ms 00:21:53.755 [2024-11-29 18:35:13.623223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.755 [2024-11-29 18:35:13.625362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.755 [2024-11-29 18:35:13.625396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:53.755 [2024-11-29 18:35:13.625404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.102 
ms 00:21:53.755 [2024-11-29 18:35:13.625411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.755 [2024-11-29 18:35:13.628739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.755 [2024-11-29 18:35:13.628843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:53.755 [2024-11-29 18:35:13.628874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.289 ms 00:21:53.755 [2024-11-29 18:35:13.628895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.755 [2024-11-29 18:35:13.629907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.755 [2024-11-29 18:35:13.629966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:53.755 [2024-11-29 18:35:13.629993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:21:53.755 [2024-11-29 18:35:13.630025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.016 [2024-11-29 18:35:13.656300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.016 [2024-11-29 18:35:13.656349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:54.016 [2024-11-29 18:35:13.656363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.178 ms 00:21:54.016 [2024-11-29 18:35:13.656371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.016 [2024-11-29 18:35:13.664336] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:54.016 [2024-11-29 18:35:13.667374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.016 [2024-11-29 18:35:13.667407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:54.016 [2024-11-29 18:35:13.667418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.956 ms 00:21:54.017 [2024-11-29 18:35:13.667435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.017 [2024-11-29 18:35:13.667528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.017 [2024-11-29 18:35:13.667540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:54.017 [2024-11-29 18:35:13.667550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:54.017 [2024-11-29 18:35:13.667558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.017 [2024-11-29 18:35:13.667634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.017 [2024-11-29 18:35:13.667649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:54.017 [2024-11-29 18:35:13.667663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:54.017 [2024-11-29 18:35:13.667670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.017 [2024-11-29 18:35:13.667690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.017 [2024-11-29 18:35:13.667700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:54.017 [2024-11-29 18:35:13.667708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:54.017 [2024-11-29 18:35:13.667717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.017 [2024-11-29 18:35:13.667753] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:54.017 
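As an aside, the layout dump traced above is internally consistent: the size of the L2P region follows directly from the reported entry count and address size, and matches both the "80.00 MiB" figure in the NV cache layout dump and the blk_sz in the superblock metadata layout. A minimal arithmetic sketch (Python; all numbers are copied from the log above, and the 4 KiB FTL block size is inferred from the same dump, where 0x5000 blocks correspond to 80.00 MiB):

# Cross-check of the FTL layout dump above (numbers copied from the log).
l2p_entries = 20971520        # "L2P entries: 20971520"
l2p_addr_size = 4             # "L2P address size: 4" (bytes per entry)
ftl_block_size = 4096         # inferred: 0x5000 blocks == 80.00 MiB -> 4096 B per block

l2p_bytes = l2p_entries * l2p_addr_size
assert l2p_bytes == 80 * 1024 * 1024          # "Region l2p ... blocks: 80.00 MiB"
assert l2p_bytes // ftl_block_size == 0x5000  # "Region type:0x2 ... blk_sz:0x5000"
print(f"L2P table: {l2p_bytes / 2**20:.2f} MiB")  # -> 80.00 MiB
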
[2024-11-29 18:35:13.667766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.017 [2024-11-29 18:35:13.667779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:54.017 [2024-11-29 18:35:13.667793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:54.017 [2024-11-29 18:35:13.667801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.017 [2024-11-29 18:35:13.672832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.017 [2024-11-29 18:35:13.672872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:54.017 [2024-11-29 18:35:13.672892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.013 ms 00:21:54.017 [2024-11-29 18:35:13.672901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.017 [2024-11-29 18:35:13.672984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.017 [2024-11-29 18:35:13.672993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:54.017 [2024-11-29 18:35:13.673002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:54.017 [2024-11-29 18:35:13.673017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.017 [2024-11-29 18:35:13.674181] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 119.991 ms, result 0 00:21:54.962  [2024-11-29T18:35:15.809Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-29T18:35:16.753Z] Copying: 30/1024 [MB] (15 MBps) [2024-11-29T18:35:17.696Z] Copying: 55/1024 [MB] (24 MBps) [2024-11-29T18:35:19.084Z] Copying: 68/1024 [MB] (13 MBps) [2024-11-29T18:35:20.028Z] Copying: 80/1024 [MB] (12 MBps) [2024-11-29T18:35:20.974Z] Copying: 102/1024 [MB] (21 MBps) [2024-11-29T18:35:21.917Z] Copying: 113/1024 [MB] (11 MBps) [2024-11-29T18:35:22.860Z] Copying: 130/1024 [MB] (16 MBps) [2024-11-29T18:35:23.843Z] Copying: 148/1024 [MB] (18 MBps) [2024-11-29T18:35:24.801Z] Copying: 164/1024 [MB] (16 MBps) [2024-11-29T18:35:25.742Z] Copying: 183/1024 [MB] (18 MBps) [2024-11-29T18:35:26.685Z] Copying: 200/1024 [MB] (16 MBps) [2024-11-29T18:35:28.072Z] Copying: 224/1024 [MB] (24 MBps) [2024-11-29T18:35:29.015Z] Copying: 256/1024 [MB] (32 MBps) [2024-11-29T18:35:29.961Z] Copying: 288/1024 [MB] (31 MBps) [2024-11-29T18:35:30.905Z] Copying: 307/1024 [MB] (18 MBps) [2024-11-29T18:35:31.848Z] Copying: 328/1024 [MB] (21 MBps) [2024-11-29T18:35:32.792Z] Copying: 351/1024 [MB] (23 MBps) [2024-11-29T18:35:33.735Z] Copying: 378/1024 [MB] (26 MBps) [2024-11-29T18:35:35.122Z] Copying: 397/1024 [MB] (19 MBps) [2024-11-29T18:35:35.695Z] Copying: 429/1024 [MB] (31 MBps) [2024-11-29T18:35:37.090Z] Copying: 444/1024 [MB] (14 MBps) [2024-11-29T18:35:38.031Z] Copying: 479/1024 [MB] (35 MBps) [2024-11-29T18:35:38.976Z] Copying: 517/1024 [MB] (38 MBps) [2024-11-29T18:35:39.923Z] Copying: 555/1024 [MB] (37 MBps) [2024-11-29T18:35:40.866Z] Copying: 581/1024 [MB] (25 MBps) [2024-11-29T18:35:41.808Z] Copying: 601/1024 [MB] (20 MBps) [2024-11-29T18:35:42.750Z] Copying: 613/1024 [MB] (11 MBps) [2024-11-29T18:35:43.694Z] Copying: 629/1024 [MB] (16 MBps) [2024-11-29T18:35:45.081Z] Copying: 651/1024 [MB] (21 MBps) [2024-11-29T18:35:46.033Z] Copying: 672/1024 [MB] (20 MBps) [2024-11-29T18:35:46.978Z] Copying: 710/1024 [MB] (37 MBps) [2024-11-29T18:35:47.921Z] Copying: 745/1024 [MB] (35 MBps) [2024-11-29T18:35:48.866Z] Copying: 779/1024 
[MB] (33 MBps) [2024-11-29T18:35:49.811Z] Copying: 804/1024 [MB] (25 MBps) [2024-11-29T18:35:50.755Z] Copying: 826/1024 [MB] (21 MBps) [2024-11-29T18:35:51.698Z] Copying: 846/1024 [MB] (20 MBps) [2024-11-29T18:35:53.087Z] Copying: 872/1024 [MB] (26 MBps) [2024-11-29T18:35:54.032Z] Copying: 908/1024 [MB] (35 MBps) [2024-11-29T18:35:54.978Z] Copying: 927/1024 [MB] (19 MBps) [2024-11-29T18:35:55.974Z] Copying: 939/1024 [MB] (11 MBps) [2024-11-29T18:35:56.939Z] Copying: 966/1024 [MB] (27 MBps) [2024-11-29T18:35:57.881Z] Copying: 982/1024 [MB] (15 MBps) [2024-11-29T18:35:58.826Z] Copying: 997/1024 [MB] (15 MBps) [2024-11-29T18:35:59.768Z] Copying: 1016/1024 [MB] (19 MBps) [2024-11-29T18:36:00.028Z] Copying: 1048312/1048576 [kB] (6968 kBps) [2024-11-29T18:36:00.028Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-29 18:35:59.911459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.123 [2024-11-29 18:35:59.911595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:40.123 [2024-11-29 18:35:59.911611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:40.123 [2024-11-29 18:35:59.911618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.123 [2024-11-29 18:35:59.913334] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:40.123 [2024-11-29 18:35:59.915157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.123 [2024-11-29 18:35:59.915196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:40.123 [2024-11-29 18:35:59.915206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.792 ms 00:22:40.123 [2024-11-29 18:35:59.915213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.123 [2024-11-29 18:35:59.924968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.123 [2024-11-29 18:35:59.924998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:40.123 [2024-11-29 18:35:59.925006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.186 ms 00:22:40.123 [2024-11-29 18:35:59.925012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.123 [2024-11-29 18:35:59.941134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.123 [2024-11-29 18:35:59.941168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:40.123 [2024-11-29 18:35:59.941175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.109 ms 00:22:40.123 [2024-11-29 18:35:59.941184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.123 [2024-11-29 18:35:59.945944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.123 [2024-11-29 18:35:59.945967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:40.123 [2024-11-29 18:35:59.945975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.739 ms 00:22:40.123 [2024-11-29 18:35:59.945982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.123 [2024-11-29 18:35:59.946857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.123 [2024-11-29 18:35:59.946886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:40.123 [2024-11-29 18:35:59.946894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:22:40.123 
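For reference, the "(average 22 MBps)" figure at the end of the progress trace is consistent with the wall-clock span of the copy: startup finished at 18:35:13.674 and the first shutdown step is logged at 18:35:59.911, so 1024 MiB moved in roughly 46 seconds. A rough back-of-the-envelope check (Python; timestamps are read off the log above, so this is approximate):

# Approximate throughput check for the copy traced above.
from datetime import datetime

start = datetime.fromisoformat("2024-11-29T18:35:13.674")  # "FTL startup ... finished"
end   = datetime.fromisoformat("2024-11-29T18:35:59.911")  # first shutdown trace_step
copied_mib = 1024                                          # "Copying: 1024/1024 [MB]"

elapsed = (end - start).total_seconds()    # ~46.2 s
print(f"{copied_mib / elapsed:.1f} MBps")  # ~22.2, matching "(average 22 MBps)"
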
[2024-11-29 18:35:59.946900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.123 [2024-11-29 18:35:59.949978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.123 [2024-11-29 18:35:59.950011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:40.123 [2024-11-29 18:35:59.950019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.055 ms 00:22:40.123 [2024-11-29 18:35:59.950027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.385 [2024-11-29 18:36:00.028343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.385 [2024-11-29 18:36:00.028383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:40.385 [2024-11-29 18:36:00.028398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.283 ms 00:22:40.385 [2024-11-29 18:36:00.028410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.385 [2024-11-29 18:36:00.030143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.385 [2024-11-29 18:36:00.030170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:40.385 [2024-11-29 18:36:00.030177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.721 ms 00:22:40.385 [2024-11-29 18:36:00.030183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.385 [2024-11-29 18:36:00.031480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.385 [2024-11-29 18:36:00.031505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:40.385 [2024-11-29 18:36:00.031512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.275 ms 00:22:40.385 [2024-11-29 18:36:00.031518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.385 [2024-11-29 18:36:00.032272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.385 [2024-11-29 18:36:00.032300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:40.385 [2024-11-29 18:36:00.032306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:22:40.385 [2024-11-29 18:36:00.032312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.385 [2024-11-29 18:36:00.032991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.385 [2024-11-29 18:36:00.033017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:40.385 [2024-11-29 18:36:00.033024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:22:40.385 [2024-11-29 18:36:00.033030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.385 [2024-11-29 18:36:00.033050] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:40.385 [2024-11-29 18:36:00.033061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 110336 / 261120 wr_cnt: 1 state: open 00:22:40.385 [2024-11-29 18:36:00.033069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033086] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 
18:36:00.033231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:40.385 [2024-11-29 18:36:00.033354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:22:40.386 [2024-11-29 18:36:00.033376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:40.386 [2024-11-29 18:36:00.033672] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:40.386 [2024-11-29 18:36:00.033678] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d5cc8cc9-2901-46c0-b9e1-4e931c6522e6 00:22:40.386 [2024-11-29 18:36:00.033685] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 110336 00:22:40.386 [2024-11-29 18:36:00.033697] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 111296 00:22:40.386 [2024-11-29 
18:36:00.033702] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 110336 00:22:40.386 [2024-11-29 18:36:00.033709] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0087 00:22:40.386 [2024-11-29 18:36:00.033714] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:40.386 [2024-11-29 18:36:00.033720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:40.386 [2024-11-29 18:36:00.033726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:40.386 [2024-11-29 18:36:00.033738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:40.386 [2024-11-29 18:36:00.033743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:40.386 [2024-11-29 18:36:00.033749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.386 [2024-11-29 18:36:00.033755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:40.386 [2024-11-29 18:36:00.033761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:22:40.386 [2024-11-29 18:36:00.033767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.386 [2024-11-29 18:36:00.035019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.386 [2024-11-29 18:36:00.035040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:40.386 [2024-11-29 18:36:00.035047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:22:40.386 [2024-11-29 18:36:00.035058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.386 [2024-11-29 18:36:00.035124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.386 [2024-11-29 18:36:00.035131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:40.386 [2024-11-29 18:36:00.035137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:40.386 [2024-11-29 18:36:00.035146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.386 [2024-11-29 18:36:00.039309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.386 [2024-11-29 18:36:00.039331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:40.386 [2024-11-29 18:36:00.039339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.387 [2024-11-29 18:36:00.039345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.387 [2024-11-29 18:36:00.039388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.387 [2024-11-29 18:36:00.039396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:40.387 [2024-11-29 18:36:00.039402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.387 [2024-11-29 18:36:00.039412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.387 [2024-11-29 18:36:00.039469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.387 [2024-11-29 18:36:00.039478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:40.387 [2024-11-29 18:36:00.039484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.387 [2024-11-29 18:36:00.039490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.387 [2024-11-29 18:36:00.039502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:22:40.387 [2024-11-29 18:36:00.039508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:40.387 [2024-11-29 18:36:00.039514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.387 [2024-11-29 18:36:00.039520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.387 [2024-11-29 18:36:00.047010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.387 [2024-11-29 18:36:00.047045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:40.387 [2024-11-29 18:36:00.047053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.387 [2024-11-29 18:36:00.047060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.387 [2024-11-29 18:36:00.053150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.387 [2024-11-29 18:36:00.053184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:40.387 [2024-11-29 18:36:00.053193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.387 [2024-11-29 18:36:00.053199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.387 [2024-11-29 18:36:00.053222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.387 [2024-11-29 18:36:00.053229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:40.387 [2024-11-29 18:36:00.053235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.387 [2024-11-29 18:36:00.053241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.387 [2024-11-29 18:36:00.053271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.387 [2024-11-29 18:36:00.053278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:40.387 [2024-11-29 18:36:00.053285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.387 [2024-11-29 18:36:00.053291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.387 [2024-11-29 18:36:00.053349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.387 [2024-11-29 18:36:00.053359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:40.387 [2024-11-29 18:36:00.053368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.387 [2024-11-29 18:36:00.053374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.387 [2024-11-29 18:36:00.053395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.387 [2024-11-29 18:36:00.053407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:40.387 [2024-11-29 18:36:00.053413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.387 [2024-11-29 18:36:00.053418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.387 [2024-11-29 18:36:00.053448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.387 [2024-11-29 18:36:00.053470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:40.387 [2024-11-29 18:36:00.053476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.387 [2024-11-29 18:36:00.053482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.387 
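The WAF value in the ftl_dev_dump_stats output above is simply the ratio of total to user writes, and the reported counters reproduce it exactly; note also that the 110336 user writes equal the "total valid LBAs" and Band 1's fill level in the same dump. A one-line check (Python, numbers copied from the stats dump):

# WAF (write amplification factor) check against the stats dump above.
total_writes = 111296   # "total writes: 111296"
user_writes  = 110336   # "user writes: 110336" (== total valid LBAs)

waf = total_writes / user_writes
print(f"WAF: {waf:.4f}")  # -> 1.0087, matching the logged "WAF: 1.0087"
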
[2024-11-29 18:36:00.053513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.387 [2024-11-29 18:36:00.053523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:40.387 [2024-11-29 18:36:00.053529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.387 [2024-11-29 18:36:00.053535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.387 [2024-11-29 18:36:00.053632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 144.447 ms, result 0 00:22:40.957 00:22:40.957 00:22:40.957 18:36:00 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:40.957 [2024-11-29 18:36:00.805875] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:22:40.957 [2024-11-29 18:36:00.806002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90264 ] 00:22:41.216 [2024-11-29 18:36:00.964633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.216 [2024-11-29 18:36:00.985558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.216 [2024-11-29 18:36:01.086506] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:41.216 [2024-11-29 18:36:01.086591] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:41.476 [2024-11-29 18:36:01.247342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.476 [2024-11-29 18:36:01.247409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:41.476 [2024-11-29 18:36:01.247424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:41.476 [2024-11-29 18:36:01.247433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.476 [2024-11-29 18:36:01.247509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.476 [2024-11-29 18:36:01.247524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:41.476 [2024-11-29 18:36:01.247533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:41.476 [2024-11-29 18:36:01.247541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.476 [2024-11-29 18:36:01.247572] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:41.476 [2024-11-29 18:36:01.248169] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:41.476 [2024-11-29 18:36:01.248235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.476 [2024-11-29 18:36:01.248246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:41.476 [2024-11-29 18:36:01.248260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:22:41.476 [2024-11-29 18:36:01.248272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.476 [2024-11-29 18:36:01.249973] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 
0, shm_clean 0 00:22:41.476 [2024-11-29 18:36:01.253911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.476 [2024-11-29 18:36:01.253965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:41.476 [2024-11-29 18:36:01.253984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.940 ms 00:22:41.476 [2024-11-29 18:36:01.253995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.476 [2024-11-29 18:36:01.254081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.476 [2024-11-29 18:36:01.254097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:41.476 [2024-11-29 18:36:01.254106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:41.476 [2024-11-29 18:36:01.254114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.476 [2024-11-29 18:36:01.262397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.476 [2024-11-29 18:36:01.262445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:41.476 [2024-11-29 18:36:01.262475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.234 ms 00:22:41.476 [2024-11-29 18:36:01.262483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.476 [2024-11-29 18:36:01.262581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.476 [2024-11-29 18:36:01.262591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:41.476 [2024-11-29 18:36:01.262600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:22:41.476 [2024-11-29 18:36:01.262610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.476 [2024-11-29 18:36:01.262673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.476 [2024-11-29 18:36:01.262688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:41.476 [2024-11-29 18:36:01.262702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:41.476 [2024-11-29 18:36:01.262712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.476 [2024-11-29 18:36:01.262735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:41.476 [2024-11-29 18:36:01.264759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.476 [2024-11-29 18:36:01.264793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:41.476 [2024-11-29 18:36:01.264804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.031 ms 00:22:41.476 [2024-11-29 18:36:01.264812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.476 [2024-11-29 18:36:01.264848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.476 [2024-11-29 18:36:01.264856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:41.476 [2024-11-29 18:36:01.264865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:41.476 [2024-11-29 18:36:01.264881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.476 [2024-11-29 18:36:01.264904] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:41.476 [2024-11-29 18:36:01.264925] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:41.476 [2024-11-29 18:36:01.264962] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:41.476 [2024-11-29 18:36:01.264978] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:41.476 [2024-11-29 18:36:01.265083] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:41.476 [2024-11-29 18:36:01.265094] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:41.476 [2024-11-29 18:36:01.265109] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:41.476 [2024-11-29 18:36:01.265119] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:41.476 [2024-11-29 18:36:01.265128] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:41.476 [2024-11-29 18:36:01.265136] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:41.476 [2024-11-29 18:36:01.265144] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:41.476 [2024-11-29 18:36:01.265151] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:41.476 [2024-11-29 18:36:01.265160] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:41.476 [2024-11-29 18:36:01.265168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.476 [2024-11-29 18:36:01.265177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:41.476 [2024-11-29 18:36:01.265195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:22:41.476 [2024-11-29 18:36:01.265203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.476 [2024-11-29 18:36:01.265288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.476 [2024-11-29 18:36:01.265338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:41.476 [2024-11-29 18:36:01.265347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:41.476 [2024-11-29 18:36:01.265355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.476 [2024-11-29 18:36:01.265481] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:41.476 [2024-11-29 18:36:01.265500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:41.476 [2024-11-29 18:36:01.265515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:41.476 [2024-11-29 18:36:01.265533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.476 [2024-11-29 18:36:01.265543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:41.476 [2024-11-29 18:36:01.265552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:41.476 [2024-11-29 18:36:01.265562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:41.476 [2024-11-29 18:36:01.265571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:41.476 [2024-11-29 18:36:01.265579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:41.476 [2024-11-29 18:36:01.265586] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:41.476 [2024-11-29 18:36:01.265599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:41.476 [2024-11-29 18:36:01.265607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:41.476 [2024-11-29 18:36:01.265616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:41.476 [2024-11-29 18:36:01.265624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:41.476 [2024-11-29 18:36:01.265632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:41.476 [2024-11-29 18:36:01.265640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.476 [2024-11-29 18:36:01.265648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:41.476 [2024-11-29 18:36:01.265656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:41.476 [2024-11-29 18:36:01.265663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.476 [2024-11-29 18:36:01.265671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:41.476 [2024-11-29 18:36:01.265679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:41.476 [2024-11-29 18:36:01.265687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:41.476 [2024-11-29 18:36:01.265696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:41.476 [2024-11-29 18:36:01.265703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:41.476 [2024-11-29 18:36:01.265711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:41.476 [2024-11-29 18:36:01.265718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:41.476 [2024-11-29 18:36:01.265728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:41.476 [2024-11-29 18:36:01.265736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:41.476 [2024-11-29 18:36:01.265743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:41.476 [2024-11-29 18:36:01.265752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:41.476 [2024-11-29 18:36:01.265759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:41.476 [2024-11-29 18:36:01.265767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:41.476 [2024-11-29 18:36:01.265774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:41.476 [2024-11-29 18:36:01.265782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:41.476 [2024-11-29 18:36:01.265789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:41.476 [2024-11-29 18:36:01.265797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:41.476 [2024-11-29 18:36:01.265804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:41.476 [2024-11-29 18:36:01.265811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:41.476 [2024-11-29 18:36:01.265821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:41.476 [2024-11-29 18:36:01.265828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.476 [2024-11-29 18:36:01.265836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:41.477 [2024-11-29 
18:36:01.265844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:41.477 [2024-11-29 18:36:01.265854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.477 [2024-11-29 18:36:01.265862] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:41.477 [2024-11-29 18:36:01.265873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:41.477 [2024-11-29 18:36:01.265882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:41.477 [2024-11-29 18:36:01.265890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.477 [2024-11-29 18:36:01.265899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:41.477 [2024-11-29 18:36:01.265906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:41.477 [2024-11-29 18:36:01.265912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:41.477 [2024-11-29 18:36:01.265919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:41.477 [2024-11-29 18:36:01.265925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:41.477 [2024-11-29 18:36:01.265931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:41.477 [2024-11-29 18:36:01.265940] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:41.477 [2024-11-29 18:36:01.265949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:41.477 [2024-11-29 18:36:01.265959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:41.477 [2024-11-29 18:36:01.265967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:41.477 [2024-11-29 18:36:01.265974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:41.477 [2024-11-29 18:36:01.265982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:41.477 [2024-11-29 18:36:01.265989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:41.477 [2024-11-29 18:36:01.265996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:41.477 [2024-11-29 18:36:01.266003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:41.477 [2024-11-29 18:36:01.266010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:41.477 [2024-11-29 18:36:01.266018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:41.477 [2024-11-29 18:36:01.266025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:41.477 [2024-11-29 18:36:01.266032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 
blk_sz:0x20 00:22:41.477 [2024-11-29 18:36:01.266039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:41.477 [2024-11-29 18:36:01.266058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:41.477 [2024-11-29 18:36:01.266066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:41.477 [2024-11-29 18:36:01.266074] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:41.477 [2024-11-29 18:36:01.266084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:41.477 [2024-11-29 18:36:01.266092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:41.477 [2024-11-29 18:36:01.266100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:41.477 [2024-11-29 18:36:01.266107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:41.477 [2024-11-29 18:36:01.266117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:41.477 [2024-11-29 18:36:01.266125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.477 [2024-11-29 18:36:01.266133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:41.477 [2024-11-29 18:36:01.266141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:22:41.477 [2024-11-29 18:36:01.266151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.477 [2024-11-29 18:36:01.279708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.477 [2024-11-29 18:36:01.279755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:41.477 [2024-11-29 18:36:01.279766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.503 ms 00:22:41.477 [2024-11-29 18:36:01.279774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.477 [2024-11-29 18:36:01.279860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.477 [2024-11-29 18:36:01.279869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:41.477 [2024-11-29 18:36:01.279877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:41.477 [2024-11-29 18:36:01.279885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.477 [2024-11-29 18:36:01.299836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.477 [2024-11-29 18:36:01.299912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:41.477 [2024-11-29 18:36:01.299930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.894 ms 00:22:41.477 [2024-11-29 18:36:01.299954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.477 [2024-11-29 18:36:01.300018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.477 [2024-11-29 18:36:01.300033] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:41.477 [2024-11-29 18:36:01.300046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:41.477 [2024-11-29 18:36:01.300063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.477 [2024-11-29 18:36:01.300701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.477 [2024-11-29 18:36:01.300752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:41.477 [2024-11-29 18:36:01.300769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:22:41.477 [2024-11-29 18:36:01.300782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.477 [2024-11-29 18:36:01.300987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.477 [2024-11-29 18:36:01.301002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:41.477 [2024-11-29 18:36:01.301014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:22:41.477 [2024-11-29 18:36:01.301026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.477 [2024-11-29 18:36:01.309601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.477 [2024-11-29 18:36:01.309650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:41.477 [2024-11-29 18:36:01.309662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.548 ms 00:22:41.477 [2024-11-29 18:36:01.309669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.477 [2024-11-29 18:36:01.313684] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:41.477 [2024-11-29 18:36:01.313740] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:41.477 [2024-11-29 18:36:01.313757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.477 [2024-11-29 18:36:01.313765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:41.477 [2024-11-29 18:36:01.313774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.984 ms 00:22:41.477 [2024-11-29 18:36:01.313782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.477 [2024-11-29 18:36:01.330480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.477 [2024-11-29 18:36:01.330547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:41.477 [2024-11-29 18:36:01.330559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.644 ms 00:22:41.477 [2024-11-29 18:36:01.330567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.477 [2024-11-29 18:36:01.333718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.477 [2024-11-29 18:36:01.333767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:41.477 [2024-11-29 18:36:01.333777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.096 ms 00:22:41.477 [2024-11-29 18:36:01.333785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.477 [2024-11-29 18:36:01.336504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.477 [2024-11-29 18:36:01.336549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
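The duration and status records for this trim-metadata restore step continue directly below. Every management step in this log is a fixed four-record group from mngt/ftl_mngt.c (Action, name, duration, status), which makes the startup easy to profile offline. A minimal sketch, assuming the log has been saved one record per line to a file named ftl.log (the file name is illustrative):

  # Pair each "name: <step>" record with the "duration: <n> ms" record that
  # follows it, then list the slowest FTL management steps first.
  grep -oE 'name: .*$|duration: [0-9.]+ ms' ftl.log |
  awk '/^name: /     { step = substr($0, 7) }
       /^duration: / { printf "%8.3f ms  %s\n", $2, step }' |
  sort -rn | head

Against the startup above, this puts "Restore P2L checkpoints" (24.077 ms) and "Initialize NV cache" (19.894 ms) near the top, consistent with the 133 ms total reported when the 'FTL startup' process finishes.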
00:22:41.477 [2024-11-29 18:36:01.336560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.672 ms 00:22:41.477 [2024-11-29 18:36:01.336575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.477 [2024-11-29 18:36:01.336914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.477 [2024-11-29 18:36:01.336934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:41.477 [2024-11-29 18:36:01.336944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:22:41.477 [2024-11-29 18:36:01.336952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.477 [2024-11-29 18:36:01.361054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.477 [2024-11-29 18:36:01.361117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:41.477 [2024-11-29 18:36:01.361130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.077 ms 00:22:41.477 [2024-11-29 18:36:01.361138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.477 [2024-11-29 18:36:01.369568] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:41.477 [2024-11-29 18:36:01.372501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.477 [2024-11-29 18:36:01.372543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:41.477 [2024-11-29 18:36:01.372560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.312 ms 00:22:41.477 [2024-11-29 18:36:01.372574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.477 [2024-11-29 18:36:01.372644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.477 [2024-11-29 18:36:01.372654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:41.477 [2024-11-29 18:36:01.372663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:41.477 [2024-11-29 18:36:01.372676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.477 [2024-11-29 18:36:01.374409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.477 [2024-11-29 18:36:01.374474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:41.477 [2024-11-29 18:36:01.374485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.696 ms 00:22:41.478 [2024-11-29 18:36:01.374493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.478 [2024-11-29 18:36:01.374520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.478 [2024-11-29 18:36:01.374529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:41.478 [2024-11-29 18:36:01.374537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:41.478 [2024-11-29 18:36:01.374545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.478 [2024-11-29 18:36:01.374587] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:41.478 [2024-11-29 18:36:01.374601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.478 [2024-11-29 18:36:01.374609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:41.478 [2024-11-29 18:36:01.374620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 
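Only the closing status record of the self-test step remains below, after which ftl_mngt reports the whole 'FTL startup' process finished in 133.058 ms. A startup of this shape is normally driven over JSON-RPC; a hypothetical sequence (bdev names are illustrative, and the PCI addresses follow the 0000:00:10.0 cache / 0000:00:11.0 base pairing these tests pass on the command line):

  # Attach the cache and base NVMe controllers, then bring up the FTL bdev on top.
  scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t pcie -a 0000:00:10.0
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:00:11.0
  scripts/rpc.py bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1

When a superblock already exists on the base device, as in this restore run, the management process loads and validates it instead of formatting from scratch, which is why the first steps in the trace above are 'Load super block' and 'Validate super block'.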
00:22:41.478 [2024-11-29 18:36:01.374628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.737 [2024-11-29 18:36:01.379599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.737 [2024-11-29 18:36:01.379649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:41.737 [2024-11-29 18:36:01.379659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.953 ms 00:22:41.737 [2024-11-29 18:36:01.379667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.737 [2024-11-29 18:36:01.379751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.737 [2024-11-29 18:36:01.379761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:41.737 [2024-11-29 18:36:01.379771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:41.737 [2024-11-29 18:36:01.379781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.737 [2024-11-29 18:36:01.380860] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.058 ms, result 0 00:22:42.679  [2024-11-29T18:36:03.967Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-29T18:36:04.911Z] Copying: 43/1024 [MB] (22 MBps) [2024-11-29T18:36:05.856Z] Copying: 58/1024 [MB] (14 MBps) [2024-11-29T18:36:06.801Z] Copying: 76/1024 [MB] (17 MBps) [2024-11-29T18:36:07.747Z] Copying: 95/1024 [MB] (19 MBps) [2024-11-29T18:36:08.694Z] Copying: 106/1024 [MB] (10 MBps) [2024-11-29T18:36:09.640Z] Copying: 123/1024 [MB] (17 MBps) [2024-11-29T18:36:10.584Z] Copying: 141/1024 [MB] (18 MBps) [2024-11-29T18:36:11.974Z] Copying: 159/1024 [MB] (18 MBps) [2024-11-29T18:36:12.920Z] Copying: 175/1024 [MB] (16 MBps) [2024-11-29T18:36:13.865Z] Copying: 187/1024 [MB] (11 MBps) [2024-11-29T18:36:14.810Z] Copying: 201/1024 [MB] (14 MBps) [2024-11-29T18:36:15.752Z] Copying: 212/1024 [MB] (10 MBps) [2024-11-29T18:36:16.702Z] Copying: 230/1024 [MB] (18 MBps) [2024-11-29T18:36:17.647Z] Copying: 244/1024 [MB] (13 MBps) [2024-11-29T18:36:18.591Z] Copying: 258/1024 [MB] (14 MBps) [2024-11-29T18:36:19.977Z] Copying: 276/1024 [MB] (18 MBps) [2024-11-29T18:36:20.920Z] Copying: 296/1024 [MB] (20 MBps) [2024-11-29T18:36:21.864Z] Copying: 310/1024 [MB] (13 MBps) [2024-11-29T18:36:22.810Z] Copying: 323/1024 [MB] (13 MBps) [2024-11-29T18:36:23.756Z] Copying: 341/1024 [MB] (17 MBps) [2024-11-29T18:36:24.699Z] Copying: 356/1024 [MB] (14 MBps) [2024-11-29T18:36:25.641Z] Copying: 367/1024 [MB] (11 MBps) [2024-11-29T18:36:26.593Z] Copying: 381/1024 [MB] (13 MBps) [2024-11-29T18:36:27.600Z] Copying: 395/1024 [MB] (14 MBps) [2024-11-29T18:36:28.989Z] Copying: 406/1024 [MB] (11 MBps) [2024-11-29T18:36:29.932Z] Copying: 417/1024 [MB] (10 MBps) [2024-11-29T18:36:30.878Z] Copying: 428/1024 [MB] (11 MBps) [2024-11-29T18:36:31.823Z] Copying: 439/1024 [MB] (11 MBps) [2024-11-29T18:36:32.768Z] Copying: 457/1024 [MB] (18 MBps) [2024-11-29T18:36:33.712Z] Copying: 469/1024 [MB] (12 MBps) [2024-11-29T18:36:34.657Z] Copying: 488/1024 [MB] (18 MBps) [2024-11-29T18:36:35.602Z] Copying: 504/1024 [MB] (15 MBps) [2024-11-29T18:36:36.988Z] Copying: 514/1024 [MB] (10 MBps) [2024-11-29T18:36:37.934Z] Copying: 527/1024 [MB] (12 MBps) [2024-11-29T18:36:38.879Z] Copying: 542/1024 [MB] (15 MBps) [2024-11-29T18:36:39.821Z] Copying: 555/1024 [MB] (12 MBps) [2024-11-29T18:36:40.766Z] Copying: 574/1024 [MB] (19 MBps) [2024-11-29T18:36:41.711Z] Copying: 588/1024 [MB] (13 MBps) 
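The copy progress stream continues below until all 1024 MB have been written, at an average of 15 MBps. This is the data phase of the restore test: a file of known contents is pushed through the FTL bdev so its checksum can be re-verified after the device is brought down and back up (the 'testfile: OK' md5sum line near the end of the test). A minimal sketch of that write-then-verify pattern, assuming spdk_dd is pointed at the ftl0 bdev; the file path and exact invocations are illustrative, but 262144 blocks of 4096 bytes is the 1024 MB the log shows:

  # Generate 1 GiB of known data and record its checksum.
  dd if=/dev/urandom of=testfile bs=4096 count=262144
  md5sum testfile > testfile.md5

  # Push it through the FTL bdev, then read it back into the same path.
  build/bin/spdk_dd --if testfile --ob ftl0 --bs 4096
  build/bin/spdk_dd --ib ftl0 --of testfile --bs 4096 --count 262144

  md5sum -c testfile.md5   # prints "testfile: OK" when the data survived

Note that 1024 MB at a 4096-byte block size is exactly 262144 blocks, the same data_size constant the dirty-shutdown variant of this test sets later in the log.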
[2024-11-29T18:36:42.655Z] Copying: 603/1024 [MB] (15 MBps) [2024-11-29T18:36:43.601Z] Copying: 620/1024 [MB] (16 MBps) [2024-11-29T18:36:44.986Z] Copying: 634/1024 [MB] (13 MBps) [2024-11-29T18:36:45.931Z] Copying: 648/1024 [MB] (13 MBps) [2024-11-29T18:36:46.876Z] Copying: 661/1024 [MB] (13 MBps) [2024-11-29T18:36:47.822Z] Copying: 679/1024 [MB] (17 MBps) [2024-11-29T18:36:48.766Z] Copying: 693/1024 [MB] (14 MBps) [2024-11-29T18:36:49.710Z] Copying: 704/1024 [MB] (10 MBps) [2024-11-29T18:36:50.654Z] Copying: 714/1024 [MB] (10 MBps) [2024-11-29T18:36:51.599Z] Copying: 726/1024 [MB] (11 MBps) [2024-11-29T18:36:52.985Z] Copying: 736/1024 [MB] (10 MBps) [2024-11-29T18:36:53.928Z] Copying: 747/1024 [MB] (10 MBps) [2024-11-29T18:36:54.872Z] Copying: 768/1024 [MB] (21 MBps) [2024-11-29T18:36:55.815Z] Copying: 785/1024 [MB] (16 MBps) [2024-11-29T18:36:56.762Z] Copying: 802/1024 [MB] (16 MBps) [2024-11-29T18:36:57.707Z] Copying: 820/1024 [MB] (18 MBps) [2024-11-29T18:36:58.715Z] Copying: 839/1024 [MB] (18 MBps) [2024-11-29T18:36:59.672Z] Copying: 854/1024 [MB] (15 MBps) [2024-11-29T18:37:00.617Z] Copying: 880/1024 [MB] (25 MBps) [2024-11-29T18:37:02.007Z] Copying: 896/1024 [MB] (15 MBps) [2024-11-29T18:37:02.581Z] Copying: 915/1024 [MB] (19 MBps) [2024-11-29T18:37:03.968Z] Copying: 929/1024 [MB] (13 MBps) [2024-11-29T18:37:04.913Z] Copying: 940/1024 [MB] (10 MBps) [2024-11-29T18:37:05.853Z] Copying: 963/1024 [MB] (22 MBps) [2024-11-29T18:37:06.794Z] Copying: 987/1024 [MB] (24 MBps) [2024-11-29T18:37:07.735Z] Copying: 1013/1024 [MB] (26 MBps) [2024-11-29T18:37:08.308Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-29 18:37:08.058359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.403 [2024-11-29 18:37:08.058449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:48.403 [2024-11-29 18:37:08.058513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:48.403 [2024-11-29 18:37:08.058531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.403 [2024-11-29 18:37:08.058577] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:48.403 [2024-11-29 18:37:08.059229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.403 [2024-11-29 18:37:08.059257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:48.403 [2024-11-29 18:37:08.060010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.624 ms 00:23:48.403 [2024-11-29 18:37:08.060028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.403 [2024-11-29 18:37:08.060520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.403 [2024-11-29 18:37:08.060573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:48.403 [2024-11-29 18:37:08.060599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:23:48.403 [2024-11-29 18:37:08.060615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.403 [2024-11-29 18:37:08.069042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.403 [2024-11-29 18:37:08.069078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:48.403 [2024-11-29 18:37:08.069096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.397 ms 00:23:48.403 [2024-11-29 18:37:08.069110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:48.403 [2024-11-29 18:37:08.075336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.403 [2024-11-29 18:37:08.075365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:48.403 [2024-11-29 18:37:08.075375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.194 ms 00:23:48.403 [2024-11-29 18:37:08.075383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.403 [2024-11-29 18:37:08.077205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.403 [2024-11-29 18:37:08.077238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:48.403 [2024-11-29 18:37:08.077247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.786 ms 00:23:48.403 [2024-11-29 18:37:08.077254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.403 [2024-11-29 18:37:08.080949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.403 [2024-11-29 18:37:08.080979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:48.403 [2024-11-29 18:37:08.080986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.665 ms 00:23:48.403 [2024-11-29 18:37:08.080996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.664 [2024-11-29 18:37:08.361942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.664 [2024-11-29 18:37:08.361981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:48.664 [2024-11-29 18:37:08.361991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 280.913 ms 00:23:48.664 [2024-11-29 18:37:08.362005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.664 [2024-11-29 18:37:08.364603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.664 [2024-11-29 18:37:08.364630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:48.664 [2024-11-29 18:37:08.364637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.587 ms 00:23:48.664 [2024-11-29 18:37:08.364643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.664 [2024-11-29 18:37:08.366475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.664 [2024-11-29 18:37:08.366501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:48.664 [2024-11-29 18:37:08.366507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.808 ms 00:23:48.664 [2024-11-29 18:37:08.366512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.664 [2024-11-29 18:37:08.368130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.664 [2024-11-29 18:37:08.368160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:48.664 [2024-11-29 18:37:08.368167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.593 ms 00:23:48.664 [2024-11-29 18:37:08.368172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.664 [2024-11-29 18:37:08.369702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.664 [2024-11-29 18:37:08.369729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:48.664 [2024-11-29 18:37:08.369736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.476 ms 00:23:48.664 [2024-11-29 
18:37:08.369742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.664 [2024-11-29 18:37:08.369764] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:48.664 [2024-11-29 18:37:08.369775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:23:48.664 [2024-11-29 18:37:08.369783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369913] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.369997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.370002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.370008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.370013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.370020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.370025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.370041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.370047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.370052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.370058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 
18:37:08.370063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.370069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.370075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.370081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.370086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.370092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.370098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.370103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.370109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:48.664 [2024-11-29 18:37:08.370114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
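The band dump continues in the same pattern through Band 100; only Band 1 carries data (131072 of 261120 blocks valid, wr_cnt 1) and every other band is still free. Immediately after it, ftl_dev_dump_stats prints the write counters, and those two numbers are worth reading together: total writes 21696 against user writes 20736 gives a write amplification factor of 21696 / 20736 = 1.0463, exactly the 'WAF: 1.0463' the log reports, and a value this close to 1.0 means almost no background relocation happened during this short run. A one-liner to pull the same figure out of a saved per-record log (file name illustrative):

  # Ratio of total media writes to host writes from the ftl_dev_dump_stats records.
  awk '/total writes:/ { t = $NF } /user writes:/ { u = $NF }
       END { if (u) printf "WAF = %.4f\n", t / u }' ftl.log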
00:23:48.665 [2024-11-29 18:37:08.370203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:48.665 [2024-11-29 18:37:08.370347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free
00:23:48.665 [2024-11-29 18:37:08.370353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:23:48.665 [2024-11-29 18:37:08.370359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:23:48.665 [2024-11-29 18:37:08.370371] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:23:48.665 [2024-11-29 18:37:08.370377] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d5cc8cc9-2901-46c0-b9e1-4e931c6522e6
00:23:48.665 [2024-11-29 18:37:08.370383] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:23:48.665 [2024-11-29 18:37:08.370395] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 21696
00:23:48.665 [2024-11-29 18:37:08.370400] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 20736
00:23:48.665 [2024-11-29 18:37:08.370407] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0463
00:23:48.665 [2024-11-29 18:37:08.370412] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:48.665 [2024-11-29 18:37:08.370418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:23:48.665 [2024-11-29 18:37:08.370424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:23:48.665 [2024-11-29 18:37:08.370429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:23:48.665 [2024-11-29 18:37:08.370438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:23:48.665 [2024-11-29 18:37:08.370444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.665 [2024-11-29 18:37:08.370450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:23:48.665 [2024-11-29 18:37:08.370466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms
00:23:48.665 [2024-11-29 18:37:08.370471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.665 [2024-11-29 18:37:08.371719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.665 [2024-11-29 18:37:08.371739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:23:48.665 [2024-11-29 18:37:08.371747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms
00:23:48.665 [2024-11-29 18:37:08.371753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.665 [2024-11-29 18:37:08.371819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.665 [2024-11-29 18:37:08.371825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:23:48.665 [2024-11-29 18:37:08.371831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:23:48.665 [2024-11-29 18:37:08.371841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.665 [2024-11-29 18:37:08.375997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:48.665 [2024-11-29 18:37:08.376019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:23:48.665 [2024-11-29 18:37:08.376026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:48.665 [2024-11-29 18:37:08.376032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.665 [2024-11-29 18:37:08.376070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:48.665 [2024-11-29 18:37:08.376076] mngt/ftl_mngt.c:
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:48.665 [2024-11-29 18:37:08.376082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.665 [2024-11-29 18:37:08.376091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.665 [2024-11-29 18:37:08.376117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.665 [2024-11-29 18:37:08.376124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:48.665 [2024-11-29 18:37:08.376130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.665 [2024-11-29 18:37:08.376136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.665 [2024-11-29 18:37:08.376147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.665 [2024-11-29 18:37:08.376154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:48.665 [2024-11-29 18:37:08.376160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.665 [2024-11-29 18:37:08.376169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.665 [2024-11-29 18:37:08.383745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.665 [2024-11-29 18:37:08.383778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:48.665 [2024-11-29 18:37:08.383787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.665 [2024-11-29 18:37:08.383793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.665 [2024-11-29 18:37:08.389896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.665 [2024-11-29 18:37:08.389929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:48.665 [2024-11-29 18:37:08.389936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.665 [2024-11-29 18:37:08.389943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.665 [2024-11-29 18:37:08.389985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.665 [2024-11-29 18:37:08.389992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:48.665 [2024-11-29 18:37:08.389998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.665 [2024-11-29 18:37:08.390005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.665 [2024-11-29 18:37:08.390023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.665 [2024-11-29 18:37:08.390052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:48.665 [2024-11-29 18:37:08.390058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.665 [2024-11-29 18:37:08.390064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.665 [2024-11-29 18:37:08.390112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.665 [2024-11-29 18:37:08.390122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:48.665 [2024-11-29 18:37:08.390127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.665 [2024-11-29 18:37:08.390133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.666 [2024-11-29 18:37:08.390153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:23:48.666 [2024-11-29 18:37:08.390160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:48.666 [2024-11-29 18:37:08.390166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.666 [2024-11-29 18:37:08.390172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.666 [2024-11-29 18:37:08.390202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.666 [2024-11-29 18:37:08.390211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:48.666 [2024-11-29 18:37:08.390217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.666 [2024-11-29 18:37:08.390223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.666 [2024-11-29 18:37:08.390252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.666 [2024-11-29 18:37:08.390259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:48.666 [2024-11-29 18:37:08.390266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.666 [2024-11-29 18:37:08.390271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.666 [2024-11-29 18:37:08.390363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.019 ms, result 0 00:23:48.666 00:23:48.666 00:23:48.666 18:37:08 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:51.204 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:51.204 18:37:10 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:51.204 18:37:10 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:51.204 18:37:10 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:51.204 18:37:10 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:51.204 18:37:10 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:51.204 Process with pid 88405 is not found 00:23:51.204 Remove shared memory files 00:23:51.204 18:37:10 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 88405 00:23:51.204 18:37:10 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88405 ']' 00:23:51.204 18:37:10 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88405 00:23:51.204 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88405) - No such process 00:23:51.204 18:37:10 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 88405 is not found' 00:23:51.204 18:37:10 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:51.204 18:37:10 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:51.204 18:37:10 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:51.204 18:37:10 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:51.204 18:37:10 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:51.204 18:37:10 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:51.204 18:37:10 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:51.204 00:23:51.204 real 4m7.982s 00:23:51.204 user 3m56.335s 00:23:51.204 sys 0m11.414s 00:23:51.204 18:37:10 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:51.204 18:37:10 
ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:51.204 ************************************ 00:23:51.204 END TEST ftl_restore 00:23:51.204 ************************************ 00:23:51.205 18:37:10 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:51.205 18:37:10 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:51.205 18:37:10 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:51.205 18:37:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:51.205 ************************************ 00:23:51.205 START TEST ftl_dirty_shutdown 00:23:51.205 ************************************ 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:51.205 * Looking for test storage... 00:23:51.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:51.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.205 --rc genhtml_branch_coverage=1 00:23:51.205 --rc genhtml_function_coverage=1 00:23:51.205 --rc genhtml_legend=1 00:23:51.205 --rc geninfo_all_blocks=1 00:23:51.205 --rc geninfo_unexecuted_blocks=1 00:23:51.205 00:23:51.205 ' 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:51.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.205 --rc genhtml_branch_coverage=1 00:23:51.205 --rc genhtml_function_coverage=1 00:23:51.205 --rc genhtml_legend=1 00:23:51.205 --rc geninfo_all_blocks=1 00:23:51.205 --rc geninfo_unexecuted_blocks=1 00:23:51.205 00:23:51.205 ' 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:51.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.205 --rc genhtml_branch_coverage=1 00:23:51.205 --rc genhtml_function_coverage=1 00:23:51.205 --rc genhtml_legend=1 00:23:51.205 --rc geninfo_all_blocks=1 00:23:51.205 --rc geninfo_unexecuted_blocks=1 00:23:51.205 00:23:51.205 ' 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:51.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.205 --rc genhtml_branch_coverage=1 00:23:51.205 --rc genhtml_function_coverage=1 00:23:51.205 --rc genhtml_legend=1 00:23:51.205 --rc geninfo_all_blocks=1 00:23:51.205 --rc geninfo_unexecuted_blocks=1 00:23:51.205 00:23:51.205 ' 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:51.205 18:37:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:51.205 18:37:11 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=91046 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 91046 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91046 ']' 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.205 18:37:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:51.205 [2024-11-29 18:37:11.075039] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:23:51.205 [2024-11-29 18:37:11.075414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91046 ] 00:23:51.465 [2024-11-29 18:37:11.224146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.465 [2024-11-29 18:37:11.241701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.035 18:37:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.035 18:37:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:23:52.035 18:37:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:52.035 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:52.035 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:52.035 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:52.035 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:52.035 18:37:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:52.295 18:37:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:52.295 18:37:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:52.295 18:37:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:52.295 18:37:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:52.295 18:37:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:52.295 18:37:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:52.295 18:37:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:52.295 18:37:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:52.557 18:37:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:52.557 { 00:23:52.557 "name": "nvme0n1", 00:23:52.557 "aliases": [ 00:23:52.557 "a66fc0e0-09b9-432f-b4e3-5b49728c1eac" 00:23:52.557 ], 00:23:52.557 "product_name": "NVMe disk", 00:23:52.557 "block_size": 4096, 00:23:52.557 "num_blocks": 1310720, 00:23:52.557 "uuid": "a66fc0e0-09b9-432f-b4e3-5b49728c1eac", 00:23:52.557 "numa_id": -1, 00:23:52.557 "assigned_rate_limits": { 00:23:52.557 "rw_ios_per_sec": 0, 00:23:52.557 "rw_mbytes_per_sec": 0, 00:23:52.557 "r_mbytes_per_sec": 0, 00:23:52.557 "w_mbytes_per_sec": 0 00:23:52.557 }, 00:23:52.557 "claimed": true, 00:23:52.557 "claim_type": "read_many_write_one", 00:23:52.557 "zoned": false, 00:23:52.557 "supported_io_types": { 00:23:52.557 "read": true, 00:23:52.557 "write": true, 00:23:52.557 "unmap": true, 00:23:52.557 "flush": true, 00:23:52.557 "reset": true, 00:23:52.557 "nvme_admin": true, 00:23:52.557 "nvme_io": true, 00:23:52.557 "nvme_io_md": false, 00:23:52.557 "write_zeroes": true, 00:23:52.557 "zcopy": false, 00:23:52.557 "get_zone_info": false, 00:23:52.557 "zone_management": false, 00:23:52.557 "zone_append": false, 00:23:52.557 "compare": true, 00:23:52.557 "compare_and_write": false, 00:23:52.557 "abort": true, 00:23:52.557 "seek_hole": false, 00:23:52.557 "seek_data": false, 00:23:52.557 
"copy": true, 00:23:52.557 "nvme_iov_md": false 00:23:52.557 }, 00:23:52.557 "driver_specific": { 00:23:52.557 "nvme": [ 00:23:52.557 { 00:23:52.557 "pci_address": "0000:00:11.0", 00:23:52.557 "trid": { 00:23:52.557 "trtype": "PCIe", 00:23:52.557 "traddr": "0000:00:11.0" 00:23:52.557 }, 00:23:52.557 "ctrlr_data": { 00:23:52.557 "cntlid": 0, 00:23:52.557 "vendor_id": "0x1b36", 00:23:52.557 "model_number": "QEMU NVMe Ctrl", 00:23:52.557 "serial_number": "12341", 00:23:52.557 "firmware_revision": "8.0.0", 00:23:52.557 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:52.557 "oacs": { 00:23:52.557 "security": 0, 00:23:52.557 "format": 1, 00:23:52.557 "firmware": 0, 00:23:52.557 "ns_manage": 1 00:23:52.557 }, 00:23:52.557 "multi_ctrlr": false, 00:23:52.557 "ana_reporting": false 00:23:52.557 }, 00:23:52.557 "vs": { 00:23:52.557 "nvme_version": "1.4" 00:23:52.557 }, 00:23:52.557 "ns_data": { 00:23:52.557 "id": 1, 00:23:52.557 "can_share": false 00:23:52.557 } 00:23:52.557 } 00:23:52.557 ], 00:23:52.557 "mp_policy": "active_passive" 00:23:52.557 } 00:23:52.557 } 00:23:52.557 ]' 00:23:52.557 18:37:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:52.557 18:37:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:52.557 18:37:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:52.557 18:37:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:52.557 18:37:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:52.557 18:37:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:23:52.557 18:37:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:52.557 18:37:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:52.557 18:37:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:52.557 18:37:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:52.557 18:37:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:52.817 18:37:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=343e90de-d3a9-47bd-a146-2f2cff852dc2 00:23:52.817 18:37:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:52.817 18:37:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 343e90de-d3a9-47bd-a146-2f2cff852dc2 00:23:53.077 18:37:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:53.337 18:37:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=a62515a8-9ccc-4a82-873e-e92f8299ddb3 00:23:53.337 18:37:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a62515a8-9ccc-4a82-873e-e92f8299ddb3 00:23:53.596 18:37:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=b95c708d-684e-4eb4-b526-e82a6519400b 00:23:53.596 18:37:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:53.596 18:37:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b95c708d-684e-4eb4-b526-e82a6519400b 00:23:53.596 18:37:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:53.596 18:37:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:53.596 18:37:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=b95c708d-684e-4eb4-b526-e82a6519400b 00:23:53.596 18:37:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:53.596 18:37:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size b95c708d-684e-4eb4-b526-e82a6519400b 00:23:53.596 18:37:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b95c708d-684e-4eb4-b526-e82a6519400b 00:23:53.596 18:37:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:53.596 18:37:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:53.596 18:37:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:53.596 18:37:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b95c708d-684e-4eb4-b526-e82a6519400b 00:23:53.857 18:37:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:53.857 { 00:23:53.857 "name": "b95c708d-684e-4eb4-b526-e82a6519400b", 00:23:53.857 "aliases": [ 00:23:53.857 "lvs/nvme0n1p0" 00:23:53.857 ], 00:23:53.857 "product_name": "Logical Volume", 00:23:53.857 "block_size": 4096, 00:23:53.857 "num_blocks": 26476544, 00:23:53.857 "uuid": "b95c708d-684e-4eb4-b526-e82a6519400b", 00:23:53.857 "assigned_rate_limits": { 00:23:53.857 "rw_ios_per_sec": 0, 00:23:53.857 "rw_mbytes_per_sec": 0, 00:23:53.857 "r_mbytes_per_sec": 0, 00:23:53.857 "w_mbytes_per_sec": 0 00:23:53.857 }, 00:23:53.857 "claimed": false, 00:23:53.857 "zoned": false, 00:23:53.857 "supported_io_types": { 00:23:53.857 "read": true, 00:23:53.857 "write": true, 00:23:53.857 "unmap": true, 00:23:53.857 "flush": false, 00:23:53.857 "reset": true, 00:23:53.857 "nvme_admin": false, 00:23:53.857 "nvme_io": false, 00:23:53.857 "nvme_io_md": false, 00:23:53.857 "write_zeroes": true, 00:23:53.857 "zcopy": false, 00:23:53.857 "get_zone_info": false, 00:23:53.857 "zone_management": false, 00:23:53.857 "zone_append": false, 00:23:53.857 "compare": false, 00:23:53.857 "compare_and_write": false, 00:23:53.857 "abort": false, 00:23:53.857 "seek_hole": true, 00:23:53.857 "seek_data": true, 00:23:53.857 "copy": false, 00:23:53.857 "nvme_iov_md": false 00:23:53.857 }, 00:23:53.857 "driver_specific": { 00:23:53.857 "lvol": { 00:23:53.857 "lvol_store_uuid": "a62515a8-9ccc-4a82-873e-e92f8299ddb3", 00:23:53.857 "base_bdev": "nvme0n1", 00:23:53.857 "thin_provision": true, 00:23:53.857 "num_allocated_clusters": 0, 00:23:53.857 "snapshot": false, 00:23:53.857 "clone": false, 00:23:53.857 "esnap_clone": false 00:23:53.857 } 00:23:53.857 } 00:23:53.857 } 00:23:53.857 ]' 00:23:53.857 18:37:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:53.857 18:37:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:53.857 18:37:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:53.857 18:37:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:53.857 18:37:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:53.857 18:37:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:53.857 18:37:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:53.857 18:37:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:53.857 18:37:13 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:54.117 18:37:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:54.117 18:37:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:54.117 18:37:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size b95c708d-684e-4eb4-b526-e82a6519400b 00:23:54.117 18:37:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b95c708d-684e-4eb4-b526-e82a6519400b 00:23:54.117 18:37:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:54.117 18:37:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:54.117 18:37:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:54.117 18:37:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b95c708d-684e-4eb4-b526-e82a6519400b 00:23:54.377 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:54.378 { 00:23:54.378 "name": "b95c708d-684e-4eb4-b526-e82a6519400b", 00:23:54.378 "aliases": [ 00:23:54.378 "lvs/nvme0n1p0" 00:23:54.378 ], 00:23:54.378 "product_name": "Logical Volume", 00:23:54.378 "block_size": 4096, 00:23:54.378 "num_blocks": 26476544, 00:23:54.378 "uuid": "b95c708d-684e-4eb4-b526-e82a6519400b", 00:23:54.378 "assigned_rate_limits": { 00:23:54.378 "rw_ios_per_sec": 0, 00:23:54.378 "rw_mbytes_per_sec": 0, 00:23:54.378 "r_mbytes_per_sec": 0, 00:23:54.378 "w_mbytes_per_sec": 0 00:23:54.378 }, 00:23:54.378 "claimed": false, 00:23:54.378 "zoned": false, 00:23:54.378 "supported_io_types": { 00:23:54.378 "read": true, 00:23:54.378 "write": true, 00:23:54.378 "unmap": true, 00:23:54.378 "flush": false, 00:23:54.378 "reset": true, 00:23:54.378 "nvme_admin": false, 00:23:54.378 "nvme_io": false, 00:23:54.378 "nvme_io_md": false, 00:23:54.378 "write_zeroes": true, 00:23:54.378 "zcopy": false, 00:23:54.378 "get_zone_info": false, 00:23:54.378 "zone_management": false, 00:23:54.378 "zone_append": false, 00:23:54.378 "compare": false, 00:23:54.378 "compare_and_write": false, 00:23:54.378 "abort": false, 00:23:54.378 "seek_hole": true, 00:23:54.378 "seek_data": true, 00:23:54.378 "copy": false, 00:23:54.378 "nvme_iov_md": false 00:23:54.378 }, 00:23:54.378 "driver_specific": { 00:23:54.378 "lvol": { 00:23:54.378 "lvol_store_uuid": "a62515a8-9ccc-4a82-873e-e92f8299ddb3", 00:23:54.378 "base_bdev": "nvme0n1", 00:23:54.378 "thin_provision": true, 00:23:54.378 "num_allocated_clusters": 0, 00:23:54.378 "snapshot": false, 00:23:54.378 "clone": false, 00:23:54.378 "esnap_clone": false 00:23:54.378 } 00:23:54.378 } 00:23:54.378 } 00:23:54.378 ]' 00:23:54.378 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:54.378 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:54.378 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:54.378 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:54.378 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:54.378 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:54.378 18:37:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:54.378 18:37:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:54.638 18:37:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:54.638 18:37:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size b95c708d-684e-4eb4-b526-e82a6519400b 00:23:54.638 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b95c708d-684e-4eb4-b526-e82a6519400b 00:23:54.638 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:54.638 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:54.638 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:54.638 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b95c708d-684e-4eb4-b526-e82a6519400b 00:23:54.638 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:54.638 { 00:23:54.638 "name": "b95c708d-684e-4eb4-b526-e82a6519400b", 00:23:54.638 "aliases": [ 00:23:54.639 "lvs/nvme0n1p0" 00:23:54.639 ], 00:23:54.639 "product_name": "Logical Volume", 00:23:54.639 "block_size": 4096, 00:23:54.639 "num_blocks": 26476544, 00:23:54.639 "uuid": "b95c708d-684e-4eb4-b526-e82a6519400b", 00:23:54.639 "assigned_rate_limits": { 00:23:54.639 "rw_ios_per_sec": 0, 00:23:54.639 "rw_mbytes_per_sec": 0, 00:23:54.639 "r_mbytes_per_sec": 0, 00:23:54.639 "w_mbytes_per_sec": 0 00:23:54.639 }, 00:23:54.639 "claimed": false, 00:23:54.639 "zoned": false, 00:23:54.639 "supported_io_types": { 00:23:54.639 "read": true, 00:23:54.639 "write": true, 00:23:54.639 "unmap": true, 00:23:54.639 "flush": false, 00:23:54.639 "reset": true, 00:23:54.639 "nvme_admin": false, 00:23:54.639 "nvme_io": false, 00:23:54.639 "nvme_io_md": false, 00:23:54.639 "write_zeroes": true, 00:23:54.639 "zcopy": false, 00:23:54.639 "get_zone_info": false, 00:23:54.639 "zone_management": false, 00:23:54.639 "zone_append": false, 00:23:54.639 "compare": false, 00:23:54.639 "compare_and_write": false, 00:23:54.639 "abort": false, 00:23:54.639 "seek_hole": true, 00:23:54.639 "seek_data": true, 00:23:54.639 "copy": false, 00:23:54.639 "nvme_iov_md": false 00:23:54.639 }, 00:23:54.639 "driver_specific": { 00:23:54.639 "lvol": { 00:23:54.639 "lvol_store_uuid": "a62515a8-9ccc-4a82-873e-e92f8299ddb3", 00:23:54.639 "base_bdev": "nvme0n1", 00:23:54.639 "thin_provision": true, 00:23:54.639 "num_allocated_clusters": 0, 00:23:54.639 "snapshot": false, 00:23:54.639 "clone": false, 00:23:54.639 "esnap_clone": false 00:23:54.639 } 00:23:54.639 } 00:23:54.639 } 00:23:54.639 ]' 00:23:54.639 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:54.900 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:54.900 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:54.900 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:54.900 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:54.900 18:37:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:54.900 18:37:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:54.901 18:37:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b95c708d-684e-4eb4-b526-e82a6519400b 
--l2p_dram_limit 10' 00:23:54.901 18:37:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:54.901 18:37:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:54.901 18:37:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:54.901 18:37:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b95c708d-684e-4eb4-b526-e82a6519400b --l2p_dram_limit 10 -c nvc0n1p0 00:23:54.901 [2024-11-29 18:37:14.776542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.901 [2024-11-29 18:37:14.776584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:54.901 [2024-11-29 18:37:14.776594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:54.901 [2024-11-29 18:37:14.776602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.901 [2024-11-29 18:37:14.776648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.901 [2024-11-29 18:37:14.776659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:54.901 [2024-11-29 18:37:14.776666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:54.901 [2024-11-29 18:37:14.776677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.901 [2024-11-29 18:37:14.776693] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:54.901 [2024-11-29 18:37:14.776922] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:54.901 [2024-11-29 18:37:14.776939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.901 [2024-11-29 18:37:14.776947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:54.901 [2024-11-29 18:37:14.776953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:23:54.901 [2024-11-29 18:37:14.776960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.901 [2024-11-29 18:37:14.777013] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID dae69cb0-b2aa-49e1-af84-b0dd5275db04 00:23:54.901 [2024-11-29 18:37:14.777959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.901 [2024-11-29 18:37:14.777983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:54.901 [2024-11-29 18:37:14.777993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:54.901 [2024-11-29 18:37:14.777999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.901 [2024-11-29 18:37:14.782754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.901 [2024-11-29 18:37:14.782781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:54.901 [2024-11-29 18:37:14.782790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.710 ms 00:23:54.901 [2024-11-29 18:37:14.782797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.901 [2024-11-29 18:37:14.782859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.901 [2024-11-29 18:37:14.782866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:54.901 [2024-11-29 18:37:14.782873] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:54.901 [2024-11-29 18:37:14.782879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.901 [2024-11-29 18:37:14.782910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.901 [2024-11-29 18:37:14.782925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:54.901 [2024-11-29 18:37:14.782933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:54.901 [2024-11-29 18:37:14.782938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.901 [2024-11-29 18:37:14.782956] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:54.901 [2024-11-29 18:37:14.784210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.901 [2024-11-29 18:37:14.784239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:54.901 [2024-11-29 18:37:14.784246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:23:54.901 [2024-11-29 18:37:14.784254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.901 [2024-11-29 18:37:14.784278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.901 [2024-11-29 18:37:14.784286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:54.901 [2024-11-29 18:37:14.784293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:54.901 [2024-11-29 18:37:14.784301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.901 [2024-11-29 18:37:14.784313] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:54.901 [2024-11-29 18:37:14.784424] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:54.901 [2024-11-29 18:37:14.784437] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:54.901 [2024-11-29 18:37:14.784450] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:54.901 [2024-11-29 18:37:14.784474] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:54.901 [2024-11-29 18:37:14.784483] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:54.901 [2024-11-29 18:37:14.784490] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:54.901 [2024-11-29 18:37:14.784501] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:54.901 [2024-11-29 18:37:14.784507] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:54.901 [2024-11-29 18:37:14.784514] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:54.901 [2024-11-29 18:37:14.784520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.901 [2024-11-29 18:37:14.784527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:54.901 [2024-11-29 18:37:14.784533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:23:54.901 [2024-11-29 18:37:14.784539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.901 [2024-11-29 18:37:14.784602] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.901 [2024-11-29 18:37:14.784612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:54.901 [2024-11-29 18:37:14.784618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:54.901 [2024-11-29 18:37:14.784628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.901 [2024-11-29 18:37:14.784701] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:54.901 [2024-11-29 18:37:14.784709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:54.901 [2024-11-29 18:37:14.784716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:54.901 [2024-11-29 18:37:14.784723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.901 [2024-11-29 18:37:14.784729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:54.901 [2024-11-29 18:37:14.784736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:54.901 [2024-11-29 18:37:14.784741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:54.901 [2024-11-29 18:37:14.784747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:54.901 [2024-11-29 18:37:14.784752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:54.901 [2024-11-29 18:37:14.784758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:54.901 [2024-11-29 18:37:14.784763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:54.901 [2024-11-29 18:37:14.784770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:54.901 [2024-11-29 18:37:14.784775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:54.901 [2024-11-29 18:37:14.784783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:54.901 [2024-11-29 18:37:14.784788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:54.901 [2024-11-29 18:37:14.784794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.901 [2024-11-29 18:37:14.784799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:54.901 [2024-11-29 18:37:14.784807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:54.901 [2024-11-29 18:37:14.784812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.901 [2024-11-29 18:37:14.784819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:54.901 [2024-11-29 18:37:14.784824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:54.901 [2024-11-29 18:37:14.784831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:54.901 [2024-11-29 18:37:14.784836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:54.901 [2024-11-29 18:37:14.784843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:54.901 [2024-11-29 18:37:14.784847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:54.901 [2024-11-29 18:37:14.784853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:54.901 [2024-11-29 18:37:14.784858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:54.901 [2024-11-29 18:37:14.784865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:54.901 [2024-11-29 18:37:14.784870] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:54.901 [2024-11-29 18:37:14.784879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:54.901 [2024-11-29 18:37:14.784884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:54.901 [2024-11-29 18:37:14.784892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:54.901 [2024-11-29 18:37:14.784897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:54.901 [2024-11-29 18:37:14.784904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:54.901 [2024-11-29 18:37:14.784910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:54.901 [2024-11-29 18:37:14.784917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:54.901 [2024-11-29 18:37:14.784923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:54.901 [2024-11-29 18:37:14.784931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:54.901 [2024-11-29 18:37:14.784936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:54.901 [2024-11-29 18:37:14.784943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.902 [2024-11-29 18:37:14.784949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:54.902 [2024-11-29 18:37:14.784956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:54.902 [2024-11-29 18:37:14.784961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.902 [2024-11-29 18:37:14.784967] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:54.902 [2024-11-29 18:37:14.784973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:54.902 [2024-11-29 18:37:14.784983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:54.902 [2024-11-29 18:37:14.784988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.902 [2024-11-29 18:37:14.784997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:54.902 [2024-11-29 18:37:14.785002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:54.902 [2024-11-29 18:37:14.785009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:54.902 [2024-11-29 18:37:14.785015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:54.902 [2024-11-29 18:37:14.785020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:54.902 [2024-11-29 18:37:14.785025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:54.902 [2024-11-29 18:37:14.785034] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:54.902 [2024-11-29 18:37:14.785041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:54.902 [2024-11-29 18:37:14.785048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:54.902 [2024-11-29 18:37:14.785054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:54.902 [2024-11-29 18:37:14.785060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:54.902 [2024-11-29 18:37:14.785066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:54.902 [2024-11-29 18:37:14.785073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:54.902 [2024-11-29 18:37:14.785078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:54.902 [2024-11-29 18:37:14.785086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:54.902 [2024-11-29 18:37:14.785091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:54.902 [2024-11-29 18:37:14.785097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:54.902 [2024-11-29 18:37:14.785103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:54.902 [2024-11-29 18:37:14.785109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:54.902 [2024-11-29 18:37:14.785115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:54.902 [2024-11-29 18:37:14.785121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:54.902 [2024-11-29 18:37:14.785126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:54.902 [2024-11-29 18:37:14.785133] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:54.902 [2024-11-29 18:37:14.785139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:54.902 [2024-11-29 18:37:14.785146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:54.902 [2024-11-29 18:37:14.785151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:54.902 [2024-11-29 18:37:14.785158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:54.902 [2024-11-29 18:37:14.785163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:54.902 [2024-11-29 18:37:14.785170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.902 [2024-11-29 18:37:14.785175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:54.902 [2024-11-29 18:37:14.785183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:23:54.902 [2024-11-29 18:37:14.785188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.902 [2024-11-29 18:37:14.785223] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:54.902 [2024-11-29 18:37:14.785230] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:59.106 [2024-11-29 18:37:18.363847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.363932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:59.106 [2024-11-29 18:37:18.363953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3578.602 ms 00:23:59.106 [2024-11-29 18:37:18.363963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.378300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.378365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:59.106 [2024-11-29 18:37:18.378385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.210 ms 00:23:59.106 [2024-11-29 18:37:18.378397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.378538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.378549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:59.106 [2024-11-29 18:37:18.378560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:23:59.106 [2024-11-29 18:37:18.378569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.391027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.391081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:59.106 [2024-11-29 18:37:18.391095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.399 ms 00:23:59.106 [2024-11-29 18:37:18.391106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.391141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.391151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:59.106 [2024-11-29 18:37:18.391162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:59.106 [2024-11-29 18:37:18.391170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.391764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.391796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:59.106 [2024-11-29 18:37:18.391810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:23:59.106 [2024-11-29 18:37:18.391819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.391948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.391959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:59.106 [2024-11-29 18:37:18.391971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:23:59.106 [2024-11-29 18:37:18.391980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.399984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.400032] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:59.106 [2024-11-29 18:37:18.400044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.978 ms 00:23:59.106 [2024-11-29 18:37:18.400057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.424157] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:59.106 [2024-11-29 18:37:18.428691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.428753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:59.106 [2024-11-29 18:37:18.428771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.556 ms 00:23:59.106 [2024-11-29 18:37:18.428785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.521968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.522056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:59.106 [2024-11-29 18:37:18.522070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.126 ms 00:23:59.106 [2024-11-29 18:37:18.522093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.522304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.522319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:59.106 [2024-11-29 18:37:18.522328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:23:59.106 [2024-11-29 18:37:18.522338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.528625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.528687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:59.106 [2024-11-29 18:37:18.528702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.243 ms 00:23:59.106 [2024-11-29 18:37:18.528713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.533733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.533797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:59.106 [2024-11-29 18:37:18.533809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.969 ms 00:23:59.106 [2024-11-29 18:37:18.533819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.534218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.534234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:59.106 [2024-11-29 18:37:18.534244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:23:59.106 [2024-11-29 18:37:18.534257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.583939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.584004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:59.106 [2024-11-29 18:37:18.584020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.658 ms 00:23:59.106 [2024-11-29 18:37:18.584032] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.591300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.591373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:59.106 [2024-11-29 18:37:18.591384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.212 ms 00:23:59.106 [2024-11-29 18:37:18.591395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.597592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.597653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:59.106 [2024-11-29 18:37:18.597663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.168 ms 00:23:59.106 [2024-11-29 18:37:18.597673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.604216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.604283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:59.106 [2024-11-29 18:37:18.604295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.515 ms 00:23:59.106 [2024-11-29 18:37:18.604309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.604343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.604362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:59.106 [2024-11-29 18:37:18.604371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:59.106 [2024-11-29 18:37:18.604381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.604507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.106 [2024-11-29 18:37:18.604522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:59.106 [2024-11-29 18:37:18.604531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:59.106 [2024-11-29 18:37:18.604545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.106 [2024-11-29 18:37:18.605726] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3828.660 ms, result 0 00:23:59.106 { 00:23:59.106 "name": "ftl0", 00:23:59.106 "uuid": "dae69cb0-b2aa-49e1-af84-b0dd5275db04" 00:23:59.106 } 00:23:59.106 18:37:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:59.106 18:37:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:59.106 18:37:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:59.106 18:37:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:59.106 18:37:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:59.367 /dev/nbd0 00:23:59.367 18:37:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:59.367 18:37:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:59.367 18:37:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:23:59.367 18:37:19 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:59.367 18:37:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:59.367 18:37:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:59.367 18:37:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:23:59.367 18:37:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:59.367 18:37:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:59.367 18:37:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:59.367 1+0 records in 00:23:59.367 1+0 records out 00:23:59.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415691 s, 9.9 MB/s 00:23:59.367 18:37:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:59.367 18:37:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:23:59.367 18:37:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:59.367 18:37:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:59.367 18:37:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:23:59.367 18:37:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:59.367 [2024-11-29 18:37:19.177957] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:23:59.367 [2024-11-29 18:37:19.178133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91188 ] 00:23:59.627 [2024-11-29 18:37:19.341140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.627 [2024-11-29 18:37:19.369145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.569  [2024-11-29T18:37:21.858Z] Copying: 187/1024 [MB] (187 MBps) [2024-11-29T18:37:22.798Z] Copying: 381/1024 [MB] (193 MBps) [2024-11-29T18:37:23.739Z] Copying: 645/1024 [MB] (263 MBps) [2024-11-29T18:37:23.999Z] Copying: 899/1024 [MB] (254 MBps) [2024-11-29T18:37:24.258Z] Copying: 1024/1024 [MB] (average 228 MBps) 00:24:04.353 00:24:04.353 18:37:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:06.264 18:37:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:06.526 [2024-11-29 18:37:26.220632] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:24:06.526 [2024-11-29 18:37:26.220757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91265 ] 00:24:06.526 [2024-11-29 18:37:26.375632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.526 [2024-11-29 18:37:26.391808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.908  [2024-11-29T18:37:28.754Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-29T18:37:29.698Z] Copying: 36/1024 [MB] (17 MBps) [2024-11-29T18:37:30.670Z] Copying: 59/1024 [MB] (22 MBps) [2024-11-29T18:37:31.639Z] Copying: 75/1024 [MB] (16 MBps) [2024-11-29T18:37:32.582Z] Copying: 92/1024 [MB] (17 MBps) [2024-11-29T18:37:33.524Z] Copying: 107/1024 [MB] (15 MBps) [2024-11-29T18:37:34.466Z] Copying: 123/1024 [MB] (15 MBps) [2024-11-29T18:37:35.841Z] Copying: 143/1024 [MB] (19 MBps) [2024-11-29T18:37:36.782Z] Copying: 160/1024 [MB] (17 MBps) [2024-11-29T18:37:37.722Z] Copying: 179/1024 [MB] (19 MBps) [2024-11-29T18:37:38.664Z] Copying: 201/1024 [MB] (22 MBps) [2024-11-29T18:37:39.609Z] Copying: 221/1024 [MB] (19 MBps) [2024-11-29T18:37:40.551Z] Copying: 245/1024 [MB] (24 MBps) [2024-11-29T18:37:41.495Z] Copying: 271/1024 [MB] (25 MBps) [2024-11-29T18:37:42.873Z] Copying: 293/1024 [MB] (22 MBps) [2024-11-29T18:37:43.445Z] Copying: 325/1024 [MB] (31 MBps) [2024-11-29T18:37:44.826Z] Copying: 344/1024 [MB] (19 MBps) [2024-11-29T18:37:45.767Z] Copying: 368/1024 [MB] (23 MBps) [2024-11-29T18:37:46.706Z] Copying: 391/1024 [MB] (22 MBps) [2024-11-29T18:37:47.649Z] Copying: 417/1024 [MB] (25 MBps) [2024-11-29T18:37:48.593Z] Copying: 443/1024 [MB] (26 MBps) [2024-11-29T18:37:49.534Z] Copying: 468/1024 [MB] (24 MBps) [2024-11-29T18:37:50.478Z] Copying: 496/1024 [MB] (28 MBps) [2024-11-29T18:37:51.858Z] Copying: 520/1024 [MB] (23 MBps) [2024-11-29T18:37:52.799Z] Copying: 548/1024 [MB] (28 MBps) [2024-11-29T18:37:53.738Z] Copying: 569/1024 [MB] (20 MBps) [2024-11-29T18:37:54.680Z] Copying: 599/1024 [MB] (30 MBps) [2024-11-29T18:37:55.621Z] Copying: 631/1024 [MB] (31 MBps) [2024-11-29T18:37:56.561Z] Copying: 658/1024 [MB] (27 MBps) [2024-11-29T18:37:57.506Z] Copying: 687/1024 [MB] (28 MBps) [2024-11-29T18:37:58.443Z] Copying: 713/1024 [MB] (26 MBps) [2024-11-29T18:37:59.817Z] Copying: 739456/1048576 [kB] (8464 kBps) [2024-11-29T18:38:00.753Z] Copying: 748016/1048576 [kB] (8560 kBps) [2024-11-29T18:38:01.693Z] Copying: 757808/1048576 [kB] (9792 kBps) [2024-11-29T18:38:02.640Z] Copying: 770/1024 [MB] (30 MBps) [2024-11-29T18:38:03.613Z] Copying: 804/1024 [MB] (33 MBps) [2024-11-29T18:38:04.554Z] Copying: 840/1024 [MB] (36 MBps) [2024-11-29T18:38:05.492Z] Copying: 874/1024 [MB] (33 MBps) [2024-11-29T18:38:06.877Z] Copying: 906/1024 [MB] (32 MBps) [2024-11-29T18:38:07.448Z] Copying: 933/1024 [MB] (27 MBps) [2024-11-29T18:38:08.830Z] Copying: 963/1024 [MB] (29 MBps) [2024-11-29T18:38:09.400Z] Copying: 995/1024 [MB] (32 MBps) [2024-11-29T18:38:09.658Z] Copying: 1024/1024 [MB] (average 23 MBps) 00:24:49.754 00:24:49.754 18:38:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:49.754 18:38:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:50.014 18:38:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:50.276 
[2024-11-29 18:38:09.936216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.276 [2024-11-29 18:38:09.936290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:50.276 [2024-11-29 18:38:09.936310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:50.276 [2024-11-29 18:38:09.936320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.276 [2024-11-29 18:38:09.936349] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:50.276 [2024-11-29 18:38:09.937260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.276 [2024-11-29 18:38:09.937308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:50.276 [2024-11-29 18:38:09.937319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.890 ms 00:24:50.276 [2024-11-29 18:38:09.937330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.276 [2024-11-29 18:38:09.939862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.276 [2024-11-29 18:38:09.939919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:50.276 [2024-11-29 18:38:09.939937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.504 ms 00:24:50.276 [2024-11-29 18:38:09.939950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.276 [2024-11-29 18:38:09.960441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.276 [2024-11-29 18:38:09.960504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:50.276 [2024-11-29 18:38:09.960517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.469 ms 00:24:50.276 [2024-11-29 18:38:09.960532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.276 [2024-11-29 18:38:09.966748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.276 [2024-11-29 18:38:09.966796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:50.276 [2024-11-29 18:38:09.966808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.170 ms 00:24:50.276 [2024-11-29 18:38:09.966818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.276 [2024-11-29 18:38:09.969910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.276 [2024-11-29 18:38:09.969981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:50.276 [2024-11-29 18:38:09.969992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.008 ms 00:24:50.276 [2024-11-29 18:38:09.970003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.276 [2024-11-29 18:38:09.977440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.276 [2024-11-29 18:38:09.977523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:50.276 [2024-11-29 18:38:09.977537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.375 ms 00:24:50.276 [2024-11-29 18:38:09.977550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.276 [2024-11-29 18:38:09.977707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.276 [2024-11-29 18:38:09.977725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:50.276 [2024-11-29 
18:38:09.977743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:24:50.276 [2024-11-29 18:38:09.977755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.276 [2024-11-29 18:38:09.981183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.276 [2024-11-29 18:38:09.981236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:50.276 [2024-11-29 18:38:09.981247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.406 ms 00:24:50.276 [2024-11-29 18:38:09.981258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.276 [2024-11-29 18:38:09.984079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.276 [2024-11-29 18:38:09.984139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:50.276 [2024-11-29 18:38:09.984150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.774 ms 00:24:50.276 [2024-11-29 18:38:09.984160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.276 [2024-11-29 18:38:09.986620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.276 [2024-11-29 18:38:09.986673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:50.276 [2024-11-29 18:38:09.986683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.415 ms 00:24:50.276 [2024-11-29 18:38:09.986694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.276 [2024-11-29 18:38:09.988757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.276 [2024-11-29 18:38:09.988809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:50.276 [2024-11-29 18:38:09.988820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.986 ms 00:24:50.276 [2024-11-29 18:38:09.988830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.276 [2024-11-29 18:38:09.988873] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:50.276 [2024-11-29 18:38:09.988896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:50.276 [2024-11-29 18:38:09.988908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:50.276 [2024-11-29 18:38:09.988920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:50.276 [2024-11-29 18:38:09.988928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:50.276 [2024-11-29 18:38:09.988942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:50.276 [2024-11-29 18:38:09.988951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:50.276 [2024-11-29 18:38:09.988962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:50.276 [2024-11-29 18:38:09.988970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:50.276 [2024-11-29 18:38:09.988981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:50.276 [2024-11-29 18:38:09.988990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 
00:24:50.276 [2024-11-29 18:38:09.989002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:50.276 [2024-11-29 18:38:09.989010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:50.276 [2024-11-29 18:38:09.989025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:50.276 [2024-11-29 18:38:09.989036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:50.276 [2024-11-29 18:38:09.989050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:50.276 [2024-11-29 18:38:09.989058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 
wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989763] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:50.277 [2024-11-29 18:38:09.989927] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:50.277 [2024-11-29 18:38:09.989937] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dae69cb0-b2aa-49e1-af84-b0dd5275db04 00:24:50.277 [2024-11-29 18:38:09.989947] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:50.277 [2024-11-29 18:38:09.989956] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:50.277 [2024-11-29 18:38:09.989965] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:50.277 [2024-11-29 18:38:09.989975] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:50.277 [2024-11-29 18:38:09.989985] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:50.278 [2024-11-29 18:38:09.989993] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:50.278 [2024-11-29 18:38:09.990004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:50.278 [2024-11-29 18:38:09.990011] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:50.278 [2024-11-29 18:38:09.990020] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:50.278 [2024-11-29 18:38:09.990046] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:24:50.278 [2024-11-29 18:38:09.990056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:50.278 [2024-11-29 18:38:09.990069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.175 ms 00:24:50.278 [2024-11-29 18:38:09.990081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.278 [2024-11-29 18:38:09.993183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.278 [2024-11-29 18:38:09.993230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:50.278 [2024-11-29 18:38:09.993241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.077 ms 00:24:50.278 [2024-11-29 18:38:09.993253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.278 [2024-11-29 18:38:09.993410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.278 [2024-11-29 18:38:09.993431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:50.278 [2024-11-29 18:38:09.993441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:24:50.278 [2024-11-29 18:38:09.993469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.278 [2024-11-29 18:38:10.004173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.278 [2024-11-29 18:38:10.004223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:50.278 [2024-11-29 18:38:10.004236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.278 [2024-11-29 18:38:10.004248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.278 [2024-11-29 18:38:10.004328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.278 [2024-11-29 18:38:10.004348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:50.278 [2024-11-29 18:38:10.004357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.278 [2024-11-29 18:38:10.004367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.278 [2024-11-29 18:38:10.004448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.278 [2024-11-29 18:38:10.004489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:50.278 [2024-11-29 18:38:10.004499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.278 [2024-11-29 18:38:10.004509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.278 [2024-11-29 18:38:10.004528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.278 [2024-11-29 18:38:10.004542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:50.278 [2024-11-29 18:38:10.004557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.278 [2024-11-29 18:38:10.004568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.278 [2024-11-29 18:38:10.024608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.278 [2024-11-29 18:38:10.024674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:50.278 [2024-11-29 18:38:10.024687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.278 [2024-11-29 18:38:10.024704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
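Every management step in these traces is logged as such a four-entry group, Action, name, duration, status, by trace_step() in mngt/ftl_mngt.c; the Rollback entries are the shutdown-side counterparts of the startup initializers (reloc, bands metadata, trim map, valid map, NV cache, ...) and report 0.000 ms because they only release resources. In the dump above every band reads 0 / 261120 with wr_cnt: 0 and state free, suggesting the 1 GiB written through /dev/nbd0 still sits in the NV cache write buffer rather than in bands; that is also why the stats show WAF: inf, since write amplification is total writes divided by user writes landed in bands, and 960 / 0 has no finite value. A throwaway one-liner for pulling step durations out of a saved copy of such a log with one entry per line (build.log is a hypothetical file name):

    awk '/428:trace_step/ {sub(/.*name: /, ""); name=$0}
         /430:trace_step/ {sub(/.*duration: /, ""); print name " :: " $0}' build.log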
00:24:50.278 [2024-11-29 18:38:10.040814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.278 [2024-11-29 18:38:10.040887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:50.278 [2024-11-29 18:38:10.040900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.278 [2024-11-29 18:38:10.040912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.278 [2024-11-29 18:38:10.041021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.278 [2024-11-29 18:38:10.041039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:50.278 [2024-11-29 18:38:10.041049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.278 [2024-11-29 18:38:10.041060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.278 [2024-11-29 18:38:10.041113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.278 [2024-11-29 18:38:10.041129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:50.278 [2024-11-29 18:38:10.041138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.278 [2024-11-29 18:38:10.041151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.278 [2024-11-29 18:38:10.041243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.278 [2024-11-29 18:38:10.041257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:50.278 [2024-11-29 18:38:10.041268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.278 [2024-11-29 18:38:10.041279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.278 [2024-11-29 18:38:10.041315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.278 [2024-11-29 18:38:10.041330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:50.278 [2024-11-29 18:38:10.041345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.278 [2024-11-29 18:38:10.041357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.278 [2024-11-29 18:38:10.041414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.278 [2024-11-29 18:38:10.041511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:50.278 [2024-11-29 18:38:10.041522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.278 [2024-11-29 18:38:10.041533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.278 [2024-11-29 18:38:10.041593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.278 [2024-11-29 18:38:10.041609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:50.278 [2024-11-29 18:38:10.041617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.278 [2024-11-29 18:38:10.041631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.278 [2024-11-29 18:38:10.041812] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 105.543 ms, result 0 00:24:50.278 true 00:24:50.278 18:38:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 91046 00:24:50.278 18:38:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f 
/dev/shm/spdk_tgt_trace.pid91046 00:24:50.278 18:38:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:50.278 [2024-11-29 18:38:10.136392] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:24:50.278 [2024-11-29 18:38:10.136573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91728 ] 00:24:50.538 [2024-11-29 18:38:10.297820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.538 [2024-11-29 18:38:10.325086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.917  [2024-11-29T18:38:12.757Z] Copying: 213/1024 [MB] (213 MBps) [2024-11-29T18:38:13.692Z] Copying: 470/1024 [MB] (257 MBps) [2024-11-29T18:38:14.624Z] Copying: 727/1024 [MB] (256 MBps) [2024-11-29T18:38:14.624Z] Copying: 978/1024 [MB] (251 MBps) [2024-11-29T18:38:14.883Z] Copying: 1024/1024 [MB] (average 244 MBps) 00:24:54.978 00:24:54.978 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 91046 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:54.978 18:38:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:54.978 [2024-11-29 18:38:14.825361] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:24:54.978 [2024-11-29 18:38:14.825501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91775 ] 00:24:55.236 [2024-11-29 18:38:14.978556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.236 [2024-11-29 18:38:15.000231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.236 [2024-11-29 18:38:15.099504] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:55.236 [2024-11-29 18:38:15.099556] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:55.494 [2024-11-29 18:38:15.161595] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:55.494 [2024-11-29 18:38:15.161799] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:55.494 [2024-11-29 18:38:15.162013] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:55.753 [2024-11-29 18:38:15.484960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.753 [2024-11-29 18:38:15.484991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:55.753 [2024-11-29 18:38:15.485003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:55.753 [2024-11-29 18:38:15.485010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.753 [2024-11-29 18:38:15.485052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.753 [2024-11-29 18:38:15.485061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:55.753 
[2024-11-29 18:38:15.485069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:55.753 [2024-11-29 18:38:15.485074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.753 [2024-11-29 18:38:15.485087] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:55.753 [2024-11-29 18:38:15.485354] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:55.753 [2024-11-29 18:38:15.485396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.753 [2024-11-29 18:38:15.485402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:55.753 [2024-11-29 18:38:15.485412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:24:55.753 [2024-11-29 18:38:15.485419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.753 [2024-11-29 18:38:15.486676] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:55.753 [2024-11-29 18:38:15.489428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.753 [2024-11-29 18:38:15.489464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:55.753 [2024-11-29 18:38:15.489473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.750 ms 00:24:55.753 [2024-11-29 18:38:15.489479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.753 [2024-11-29 18:38:15.489525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.753 [2024-11-29 18:38:15.489533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:55.753 [2024-11-29 18:38:15.489540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:55.753 [2024-11-29 18:38:15.489545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.753 [2024-11-29 18:38:15.495731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.753 [2024-11-29 18:38:15.495756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:55.753 [2024-11-29 18:38:15.495765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.139 ms 00:24:55.753 [2024-11-29 18:38:15.495771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.753 [2024-11-29 18:38:15.495841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.753 [2024-11-29 18:38:15.495849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:55.753 [2024-11-29 18:38:15.495855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:55.753 [2024-11-29 18:38:15.495863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.753 [2024-11-29 18:38:15.495892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.753 [2024-11-29 18:38:15.495900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:55.753 [2024-11-29 18:38:15.495906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:55.753 [2024-11-29 18:38:15.495912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.753 [2024-11-29 18:38:15.495928] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:55.753 [2024-11-29 18:38:15.497479] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.753 [2024-11-29 18:38:15.497497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:55.753 [2024-11-29 18:38:15.497512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.556 ms 00:24:55.753 [2024-11-29 18:38:15.497520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.753 [2024-11-29 18:38:15.497545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.753 [2024-11-29 18:38:15.497551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:55.753 [2024-11-29 18:38:15.497560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:55.753 [2024-11-29 18:38:15.497566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.753 [2024-11-29 18:38:15.497582] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:55.753 [2024-11-29 18:38:15.497600] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:55.753 [2024-11-29 18:38:15.497634] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:55.753 [2024-11-29 18:38:15.497652] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:55.753 [2024-11-29 18:38:15.497736] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:55.753 [2024-11-29 18:38:15.497745] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:55.753 [2024-11-29 18:38:15.497755] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:55.753 [2024-11-29 18:38:15.497762] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:55.753 [2024-11-29 18:38:15.497769] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:55.753 [2024-11-29 18:38:15.497775] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:55.753 [2024-11-29 18:38:15.497783] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:55.753 [2024-11-29 18:38:15.497792] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:55.753 [2024-11-29 18:38:15.497803] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:55.753 [2024-11-29 18:38:15.497808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.753 [2024-11-29 18:38:15.497814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:55.753 [2024-11-29 18:38:15.497820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:24:55.753 [2024-11-29 18:38:15.497831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.753 [2024-11-29 18:38:15.497897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.753 [2024-11-29 18:38:15.497905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:55.753 [2024-11-29 18:38:15.497912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:55.753 [2024-11-29 18:38:15.497925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:55.753 [2024-11-29 18:38:15.498008] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:55.753 [2024-11-29 18:38:15.498022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:55.753 [2024-11-29 18:38:15.498037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:55.753 [2024-11-29 18:38:15.498046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:55.753 [2024-11-29 18:38:15.498053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:55.753 [2024-11-29 18:38:15.498062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:55.753 [2024-11-29 18:38:15.498068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:55.753 [2024-11-29 18:38:15.498073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:55.753 [2024-11-29 18:38:15.498079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:55.753 [2024-11-29 18:38:15.498084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:55.753 [2024-11-29 18:38:15.498090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:55.753 [2024-11-29 18:38:15.498095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:55.753 [2024-11-29 18:38:15.498101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:55.753 [2024-11-29 18:38:15.498107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:55.753 [2024-11-29 18:38:15.498113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:55.753 [2024-11-29 18:38:15.498119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:55.753 [2024-11-29 18:38:15.498125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:55.753 [2024-11-29 18:38:15.498131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:55.753 [2024-11-29 18:38:15.498136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:55.753 [2024-11-29 18:38:15.498142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:55.753 [2024-11-29 18:38:15.498148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:55.753 [2024-11-29 18:38:15.498159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:55.753 [2024-11-29 18:38:15.498165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:55.753 [2024-11-29 18:38:15.498171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:55.753 [2024-11-29 18:38:15.498178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:55.753 [2024-11-29 18:38:15.498183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:55.753 [2024-11-29 18:38:15.498188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:55.753 [2024-11-29 18:38:15.498194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:55.753 [2024-11-29 18:38:15.498199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:55.753 [2024-11-29 18:38:15.498205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:55.753 [2024-11-29 18:38:15.498214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:55.753 [2024-11-29 18:38:15.498220] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md 00:24:55.753 [2024-11-29 18:38:15.498226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:55.753 [2024-11-29 18:38:15.498232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:55.753 [2024-11-29 18:38:15.498238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:55.753 [2024-11-29 18:38:15.498244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:55.753 [2024-11-29 18:38:15.498249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:55.753 [2024-11-29 18:38:15.498257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:55.753 [2024-11-29 18:38:15.498262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:55.754 [2024-11-29 18:38:15.498268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:55.754 [2024-11-29 18:38:15.498274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:55.754 [2024-11-29 18:38:15.498280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:55.754 [2024-11-29 18:38:15.498285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:55.754 [2024-11-29 18:38:15.498291] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:55.754 [2024-11-29 18:38:15.498298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:55.754 [2024-11-29 18:38:15.498304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:55.754 [2024-11-29 18:38:15.498310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:55.754 [2024-11-29 18:38:15.498317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:55.754 [2024-11-29 18:38:15.498323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:55.754 [2024-11-29 18:38:15.498328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:55.754 [2024-11-29 18:38:15.498334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:55.754 [2024-11-29 18:38:15.498339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:55.754 [2024-11-29 18:38:15.498345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:55.754 [2024-11-29 18:38:15.498354] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:55.754 [2024-11-29 18:38:15.498364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:55.754 [2024-11-29 18:38:15.498371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:55.754 [2024-11-29 18:38:15.498378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:55.754 [2024-11-29 18:38:15.498383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:55.754 [2024-11-29 18:38:15.498389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:55.754 [2024-11-29 18:38:15.498395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:55.754 [2024-11-29 18:38:15.498401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:55.754 [2024-11-29 18:38:15.498408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:55.754 [2024-11-29 18:38:15.498414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:55.754 [2024-11-29 18:38:15.498421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:55.754 [2024-11-29 18:38:15.498428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:55.754 [2024-11-29 18:38:15.498434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:55.754 [2024-11-29 18:38:15.498440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:55.754 [2024-11-29 18:38:15.498447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:55.754 [2024-11-29 18:38:15.498465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:55.754 [2024-11-29 18:38:15.498473] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:55.754 [2024-11-29 18:38:15.498483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:55.754 [2024-11-29 18:38:15.498490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:55.754 [2024-11-29 18:38:15.498495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:55.754 [2024-11-29 18:38:15.498501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:55.754 [2024-11-29 18:38:15.498507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:55.754 [2024-11-29 18:38:15.498513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.498519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:55.754 [2024-11-29 18:38:15.498525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:24:55.754 [2024-11-29 18:38:15.498530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.509464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.509492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:55.754 [2024-11-29 18:38:15.509500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.901 ms 00:24:55.754 [2024-11-29 18:38:15.509509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
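The dump above is FTL's layout report at attach time: the NV cache regions (sb, l2p, band_md and its mirror, nvc_md and its mirror, p2l0-p2l3, trim_md/trim_log and their mirrors) and the base-device regions (sb_mirror, vmap, data_btm), followed by the versioned superblock metadata layout for both devices. Note that this attach runs inside spdk_dd, not spdk_tgt: the target that created ftl0 (pid 91046) was killed with kill -9 further up, so the bdev comes up from the saved JSON config with SHM: clean 0 and a blobstore recovery pass, and the Restore steps that follow rebuild NV cache state, P2L checkpoints and the L2P before any new writes land. The command being traced, copied from step sh@88 above:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
        --ob=ftl0 \
        --count=262144 --seek=262144 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

At the 4 KiB block size testfile2 was generated with, 262144 blocks is 1 GiB (262144 x 4096 bytes), matching the 1024/1024 [MB] copy that follows.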
00:24:55.754 [2024-11-29 18:38:15.509578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.509587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:55.754 [2024-11-29 18:38:15.509593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:55.754 [2024-11-29 18:38:15.509598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.540392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.540524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:55.754 [2024-11-29 18:38:15.540549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.758 ms 00:24:55.754 [2024-11-29 18:38:15.540558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.540609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.540619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:55.754 [2024-11-29 18:38:15.540632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:55.754 [2024-11-29 18:38:15.540640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.541080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.541109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:55.754 [2024-11-29 18:38:15.541120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:24:55.754 [2024-11-29 18:38:15.541129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.541268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.541279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:55.754 [2024-11-29 18:38:15.541288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:24:55.754 [2024-11-29 18:38:15.541301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.547905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.547931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:55.754 [2024-11-29 18:38:15.547945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.580 ms 00:24:55.754 [2024-11-29 18:38:15.547953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.550908] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:55.754 [2024-11-29 18:38:15.550932] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:55.754 [2024-11-29 18:38:15.550945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.550952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:55.754 [2024-11-29 18:38:15.550959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.913 ms 00:24:55.754 [2024-11-29 18:38:15.550964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.562267] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.562290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:55.754 [2024-11-29 18:38:15.562304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.220 ms 00:24:55.754 [2024-11-29 18:38:15.562312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.564064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.564085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:55.754 [2024-11-29 18:38:15.564092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.724 ms 00:24:55.754 [2024-11-29 18:38:15.564097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.565503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.565518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:55.754 [2024-11-29 18:38:15.565525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.381 ms 00:24:55.754 [2024-11-29 18:38:15.565531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.565791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.565800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:55.754 [2024-11-29 18:38:15.565807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:24:55.754 [2024-11-29 18:38:15.565815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.583710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.583736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:55.754 [2024-11-29 18:38:15.583745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.882 ms 00:24:55.754 [2024-11-29 18:38:15.583755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.589747] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:55.754 [2024-11-29 18:38:15.592083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.592104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:55.754 [2024-11-29 18:38:15.592113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.296 ms 00:24:55.754 [2024-11-29 18:38:15.592120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.592185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.592194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:55.754 [2024-11-29 18:38:15.592203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:55.754 [2024-11-29 18:38:15.592209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.592263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.592272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:55.754 [2024-11-29 18:38:15.592278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.025 ms 00:24:55.754 [2024-11-29 18:38:15.592287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.592303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.592310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:55.754 [2024-11-29 18:38:15.592320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:55.754 [2024-11-29 18:38:15.592328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.592357] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:55.754 [2024-11-29 18:38:15.592365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.592371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:55.754 [2024-11-29 18:38:15.592379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:55.754 [2024-11-29 18:38:15.592385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.595809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.595833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:55.754 [2024-11-29 18:38:15.595846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.411 ms 00:24:55.754 [2024-11-29 18:38:15.595854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.595913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.754 [2024-11-29 18:38:15.595921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:55.754 [2024-11-29 18:38:15.595928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:55.754 [2024-11-29 18:38:15.595934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.754 [2024-11-29 18:38:15.596785] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 111.449 ms, result 0 00:24:57.126  [2024-11-29T18:39:13.036Z] Copying: 1024/1024 [MB] (average 17 MBps) [2024-11-29 18:39:12.937667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.131 [2024-11-29 18:39:12.937896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:53.131 [2024-11-29 18:39:12.937923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:53.131 [2024-11-29 18:39:12.937934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.131 [2024-11-29 18:39:12.941404] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:53.131 [2024-11-29 18:39:12.943232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.131 [2024-11-29 18:39:12.943286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:53.131 [2024-11-29 18:39:12.943299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.760 ms 00:25:53.131 [2024-11-29 18:39:12.943313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.131 [2024-11-29 18:39:12.953657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.131 [2024-11-29 18:39:12.953714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:53.131 [2024-11-29 18:39:12.953726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 6.541 ms 00:25:53.131 [2024-11-29 18:39:12.953734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.131 [2024-11-29 18:39:12.977364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.131 [2024-11-29 18:39:12.977430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:53.131 [2024-11-29 18:39:12.977447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.593 ms 00:25:53.131 [2024-11-29 18:39:12.977471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.131 [2024-11-29 18:39:12.983830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.132 [2024-11-29 18:39:12.983878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:53.132 [2024-11-29 18:39:12.983890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.321 ms 00:25:53.132 [2024-11-29 18:39:12.983899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.132 [2024-11-29 18:39:12.986880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.132 [2024-11-29 18:39:12.986937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:53.132 [2024-11-29 18:39:12.986948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.916 ms 00:25:53.132 [2024-11-29 18:39:12.986955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.132 [2024-11-29 18:39:12.991929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.132 [2024-11-29 18:39:12.991985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:53.132 [2024-11-29 18:39:12.992010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.931 ms 00:25:53.132 [2024-11-29 18:39:12.992023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.395 [2024-11-29 18:39:13.135412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.395 [2024-11-29 18:39:13.135493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:53.395 [2024-11-29 18:39:13.135507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 143.342 ms 00:25:53.395 [2024-11-29 18:39:13.135515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.395 [2024-11-29 18:39:13.138226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.395 [2024-11-29 18:39:13.138276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:53.395 [2024-11-29 18:39:13.138285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.683 ms 00:25:53.395 [2024-11-29 18:39:13.138294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.395 [2024-11-29 18:39:13.140328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.395 [2024-11-29 18:39:13.140378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:53.395 [2024-11-29 18:39:13.140389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.992 ms 00:25:53.395 [2024-11-29 18:39:13.140397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.395 [2024-11-29 18:39:13.141976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.395 [2024-11-29 18:39:13.142033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:53.395 [2024-11-29 
18:39:13.142043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.539 ms 00:25:53.395 [2024-11-29 18:39:13.142051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.395 [2024-11-29 18:39:13.143746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.395 [2024-11-29 18:39:13.143791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:53.395 [2024-11-29 18:39:13.143801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.624 ms 00:25:53.395 [2024-11-29 18:39:13.143809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.395 [2024-11-29 18:39:13.143847] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:53.395 [2024-11-29 18:39:13.143869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 102656 / 261120 wr_cnt: 1 state: open 00:25:53.395 [2024-11-29 18:39:13.143884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.143893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.143901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.143909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.143917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.143926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.143934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.143941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.143950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.143959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.143966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.143975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.143983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.143991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.143998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144030] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 
18:39:13.144218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:53.395 [2024-11-29 18:39:13.144406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:25:53.396 [2024-11-29 18:39:13.144415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:53.396 [2024-11-29 18:39:13.144696] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:53.396 [2024-11-29 18:39:13.144705] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dae69cb0-b2aa-49e1-af84-b0dd5275db04 00:25:53.396 [2024-11-29 18:39:13.144714] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 102656 00:25:53.396 [2024-11-29 18:39:13.144722] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 103616 00:25:53.396 [2024-11-29 18:39:13.144730] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 102656 00:25:53.396 [2024-11-29 18:39:13.144750] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094 00:25:53.396 [2024-11-29 18:39:13.144757] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:53.396 [2024-11-29 18:39:13.144765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:53.396 [2024-11-29 18:39:13.144773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:53.396 [2024-11-29 18:39:13.144780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:53.396 [2024-11-29 18:39:13.144787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:53.396 [2024-11-29 18:39:13.144795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.396 [2024-11-29 18:39:13.144803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:53.396 [2024-11-29 18:39:13.144814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:25:53.396 [2024-11-29 18:39:13.144826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.396 [2024-11-29 18:39:13.147233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.396 [2024-11-29 18:39:13.147275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:53.396 [2024-11-29 18:39:13.147286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.389 ms 00:25:53.396 [2024-11-29 18:39:13.147294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.396 [2024-11-29 18:39:13.147426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.396 [2024-11-29 18:39:13.147437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:53.396 [2024-11-29 18:39:13.147447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:25:53.396 [2024-11-29 18:39:13.147475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.396 [2024-11-29 
18:39:13.155000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.396 [2024-11-29 18:39:13.155047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:53.396 [2024-11-29 18:39:13.155058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.396 [2024-11-29 18:39:13.155066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.396 [2024-11-29 18:39:13.155131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.396 [2024-11-29 18:39:13.155141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:53.396 [2024-11-29 18:39:13.155152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.396 [2024-11-29 18:39:13.155160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.396 [2024-11-29 18:39:13.155217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.396 [2024-11-29 18:39:13.155229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:53.396 [2024-11-29 18:39:13.155237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.396 [2024-11-29 18:39:13.155244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.396 [2024-11-29 18:39:13.155259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.396 [2024-11-29 18:39:13.155270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:53.396 [2024-11-29 18:39:13.155280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.396 [2024-11-29 18:39:13.155287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.396 [2024-11-29 18:39:13.168097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.396 [2024-11-29 18:39:13.168150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:53.396 [2024-11-29 18:39:13.168162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.396 [2024-11-29 18:39:13.168170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.396 [2024-11-29 18:39:13.177880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.396 [2024-11-29 18:39:13.177939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:53.396 [2024-11-29 18:39:13.177949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.396 [2024-11-29 18:39:13.177958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.396 [2024-11-29 18:39:13.178003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.396 [2024-11-29 18:39:13.178013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:53.396 [2024-11-29 18:39:13.178032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.396 [2024-11-29 18:39:13.178041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.396 [2024-11-29 18:39:13.178067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.396 [2024-11-29 18:39:13.178076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:53.396 [2024-11-29 18:39:13.178089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.396 [2024-11-29 18:39:13.178097] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.396 [2024-11-29 18:39:13.178165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.396 [2024-11-29 18:39:13.178174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:53.396 [2024-11-29 18:39:13.178183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.396 [2024-11-29 18:39:13.178191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.396 [2024-11-29 18:39:13.178229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.396 [2024-11-29 18:39:13.178239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:53.396 [2024-11-29 18:39:13.178247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.396 [2024-11-29 18:39:13.178259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.396 [2024-11-29 18:39:13.178311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.396 [2024-11-29 18:39:13.178321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:53.396 [2024-11-29 18:39:13.178329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.396 [2024-11-29 18:39:13.178338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.396 [2024-11-29 18:39:13.178386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.396 [2024-11-29 18:39:13.178397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:53.396 [2024-11-29 18:39:13.178408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.396 [2024-11-29 18:39:13.178417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.396 [2024-11-29 18:39:13.178564] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 240.905 ms, result 0 00:25:53.969 00:25:53.969 00:25:54.230 18:39:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:56.778 18:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:56.778 [2024-11-29 18:39:16.195795] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
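The statistics dumped during the shutdown above give enough to recompute the reported write amplification factor. Assuming WAF is simply total writes divided by user writes (an assumption, but one consistent with the dumped counters, where the extra 960 blocks would be metadata writes), a quick check reproduces the logged value:

    # Recompute WAF from the counters dumped above.
    # Assumption: WAF = total writes / user writes (103616 / 102656).
    awk 'BEGIN { printf "WAF: %.4f\n", 103616 / 102656 }'   # prints WAF: 1.0094

The two xtrace lines just above (dirty_shutdown.sh@90 and @93) show the verification step itself: checksum the reference file, then use spdk_dd to read the data back out of the recovered ftl0 bdev. A minimal sketch of that flow, with the paths and flags taken from the log; the final digest comparison is an assumption about what the script checks, not a quote from it:

    # Paths as they appear in the xtrace lines above.
    FTL_TESTDIR=/home/vagrant/spdk_repo/spdk/test/ftl
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

    # Reference digest of the data written before the dirty shutdown.
    md5sum "$FTL_TESTDIR/testfile2"

    # Read the data back from ftl0 into testfile (count=262144),
    # using the same FTL JSON config the test wrote earlier.
    "$SPDK_DD" --ib=ftl0 --of="$FTL_TESTDIR/testfile" --count=262144 \
               --json="$FTL_TESTDIR/config/ftl.json"

    # Presumably the test passes only if the read-back data matches:
    [ "$(md5sum < "$FTL_TESTDIR/testfile")" = "$(md5sum < "$FTL_TESTDIR/testfile2")" ]

Consistent with that, the FTL startup that follows restores state rather than starting clean: note the 'Restore NV cache metadata', 'Restore P2L checkpoints' (full chunks = 4), and 'Restore L2P' steps below, and that the device is marked dirty again ('Set FTL dirty state') before I/O resumes.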
00:25:56.778 [2024-11-29 18:39:16.195950] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92407 ] 00:25:56.778 [2024-11-29 18:39:16.359636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.778 [2024-11-29 18:39:16.387976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.778 [2024-11-29 18:39:16.497797] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:56.778 [2024-11-29 18:39:16.497873] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:56.778 [2024-11-29 18:39:16.659114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.778 [2024-11-29 18:39:16.659171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:56.778 [2024-11-29 18:39:16.659186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:56.778 [2024-11-29 18:39:16.659195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.778 [2024-11-29 18:39:16.659251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.778 [2024-11-29 18:39:16.659261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:56.778 [2024-11-29 18:39:16.659273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:56.778 [2024-11-29 18:39:16.659286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.778 [2024-11-29 18:39:16.659310] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:56.778 [2024-11-29 18:39:16.659720] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:56.778 [2024-11-29 18:39:16.659763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.778 [2024-11-29 18:39:16.659772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:56.778 [2024-11-29 18:39:16.659787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:25:56.778 [2024-11-29 18:39:16.659795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.778 [2024-11-29 18:39:16.661567] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:56.778 [2024-11-29 18:39:16.665367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.778 [2024-11-29 18:39:16.665414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:56.778 [2024-11-29 18:39:16.665431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.802 ms 00:25:56.778 [2024-11-29 18:39:16.665442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.778 [2024-11-29 18:39:16.665534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.778 [2024-11-29 18:39:16.665545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:56.778 [2024-11-29 18:39:16.665555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:56.778 [2024-11-29 18:39:16.665568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.778 [2024-11-29 18:39:16.673744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:56.778 [2024-11-29 18:39:16.673793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:56.778 [2024-11-29 18:39:16.673806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.133 ms 00:25:56.778 [2024-11-29 18:39:16.673818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.778 [2024-11-29 18:39:16.673922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.778 [2024-11-29 18:39:16.673933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:56.778 [2024-11-29 18:39:16.673945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:25:56.778 [2024-11-29 18:39:16.673953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.778 [2024-11-29 18:39:16.674013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.778 [2024-11-29 18:39:16.674043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:56.778 [2024-11-29 18:39:16.674052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:56.778 [2024-11-29 18:39:16.674064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.778 [2024-11-29 18:39:16.674088] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:56.778 [2024-11-29 18:39:16.676164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.778 [2024-11-29 18:39:16.676196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:56.778 [2024-11-29 18:39:16.676210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.082 ms 00:25:56.778 [2024-11-29 18:39:16.676217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.778 [2024-11-29 18:39:16.676250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.778 [2024-11-29 18:39:16.676259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:56.778 [2024-11-29 18:39:16.676267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:56.778 [2024-11-29 18:39:16.676277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.778 [2024-11-29 18:39:16.676304] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:56.778 [2024-11-29 18:39:16.676325] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:56.778 [2024-11-29 18:39:16.676365] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:56.778 [2024-11-29 18:39:16.676381] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:56.778 [2024-11-29 18:39:16.676506] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:56.778 [2024-11-29 18:39:16.676519] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:56.778 [2024-11-29 18:39:16.676534] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:56.778 [2024-11-29 18:39:16.676545] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:56.778 [2024-11-29 18:39:16.676555] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:56.778 [2024-11-29 18:39:16.676568] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:56.778 [2024-11-29 18:39:16.676579] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:56.779 [2024-11-29 18:39:16.676586] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:56.779 [2024-11-29 18:39:16.676594] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:56.779 [2024-11-29 18:39:16.676606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.779 [2024-11-29 18:39:16.676614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:56.779 [2024-11-29 18:39:16.676622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:25:56.779 [2024-11-29 18:39:16.676630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.779 [2024-11-29 18:39:16.676716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.779 [2024-11-29 18:39:16.676724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:56.779 [2024-11-29 18:39:16.676732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:56.779 [2024-11-29 18:39:16.676739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.779 [2024-11-29 18:39:16.676840] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:56.779 [2024-11-29 18:39:16.676856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:56.779 [2024-11-29 18:39:16.676866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:56.779 [2024-11-29 18:39:16.676881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.779 [2024-11-29 18:39:16.676897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:56.779 [2024-11-29 18:39:16.676905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:56.779 [2024-11-29 18:39:16.676914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:56.779 [2024-11-29 18:39:16.676923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:56.779 [2024-11-29 18:39:16.676931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:56.779 [2024-11-29 18:39:16.676939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:56.779 [2024-11-29 18:39:16.676947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:56.779 [2024-11-29 18:39:16.676956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:56.779 [2024-11-29 18:39:16.676964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:56.779 [2024-11-29 18:39:16.676971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:56.779 [2024-11-29 18:39:16.676979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:56.779 [2024-11-29 18:39:16.676991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.779 [2024-11-29 18:39:16.676999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:56.779 [2024-11-29 18:39:16.677006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:56.779 [2024-11-29 18:39:16.677014] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.779 [2024-11-29 18:39:16.677023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:56.779 [2024-11-29 18:39:16.677033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:56.779 [2024-11-29 18:39:16.677041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.779 [2024-11-29 18:39:16.677049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:56.779 [2024-11-29 18:39:16.677057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:56.779 [2024-11-29 18:39:16.677065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.779 [2024-11-29 18:39:16.677073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:56.779 [2024-11-29 18:39:16.677081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:56.779 [2024-11-29 18:39:16.677088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.779 [2024-11-29 18:39:16.677096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:56.779 [2024-11-29 18:39:16.677103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:56.779 [2024-11-29 18:39:16.677111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.779 [2024-11-29 18:39:16.677119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:56.779 [2024-11-29 18:39:16.677127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:56.779 [2024-11-29 18:39:16.677134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:56.779 [2024-11-29 18:39:16.677141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:56.779 [2024-11-29 18:39:16.677149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:56.779 [2024-11-29 18:39:16.677159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:56.779 [2024-11-29 18:39:16.677167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:56.779 [2024-11-29 18:39:16.677175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:56.779 [2024-11-29 18:39:16.677182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.779 [2024-11-29 18:39:16.677188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:56.779 [2024-11-29 18:39:16.677194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:56.779 [2024-11-29 18:39:16.677201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.779 [2024-11-29 18:39:16.677208] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:56.779 [2024-11-29 18:39:16.677222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:56.779 [2024-11-29 18:39:16.677229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:56.779 [2024-11-29 18:39:16.677240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.779 [2024-11-29 18:39:16.677252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:56.779 [2024-11-29 18:39:16.677259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:56.779 [2024-11-29 18:39:16.677266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:56.779 
[2024-11-29 18:39:16.677273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:56.779 [2024-11-29 18:39:16.677280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:56.779 [2024-11-29 18:39:16.677289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:56.779 [2024-11-29 18:39:16.677298] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:56.779 [2024-11-29 18:39:16.677308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:56.779 [2024-11-29 18:39:16.677317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:56.779 [2024-11-29 18:39:16.677324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:56.779 [2024-11-29 18:39:16.677331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:56.779 [2024-11-29 18:39:16.677338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:56.779 [2024-11-29 18:39:16.677346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:56.779 [2024-11-29 18:39:16.677353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:56.779 [2024-11-29 18:39:16.677360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:56.779 [2024-11-29 18:39:16.677367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:56.779 [2024-11-29 18:39:16.677375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:56.779 [2024-11-29 18:39:16.677382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:56.779 [2024-11-29 18:39:16.677388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:56.779 [2024-11-29 18:39:16.677395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:56.779 [2024-11-29 18:39:16.677403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:56.780 [2024-11-29 18:39:16.677413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:56.780 [2024-11-29 18:39:16.677421] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:56.780 [2024-11-29 18:39:16.677429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:56.780 [2024-11-29 18:39:16.677437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:56.780 [2024-11-29 18:39:16.677445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:56.780 [2024-11-29 18:39:16.677468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:56.780 [2024-11-29 18:39:16.677476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:56.780 [2024-11-29 18:39:16.677484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.780 [2024-11-29 18:39:16.677492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:56.780 [2024-11-29 18:39:16.677501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:25:56.780 [2024-11-29 18:39:16.677512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.041 [2024-11-29 18:39:16.691713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.041 [2024-11-29 18:39:16.691755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:57.041 [2024-11-29 18:39:16.691768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.156 ms 00:25:57.041 [2024-11-29 18:39:16.691777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.041 [2024-11-29 18:39:16.691869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.041 [2024-11-29 18:39:16.691889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:57.042 [2024-11-29 18:39:16.691900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:57.042 [2024-11-29 18:39:16.691909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 [2024-11-29 18:39:16.713535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.042 [2024-11-29 18:39:16.713582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:57.042 [2024-11-29 18:39:16.713595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.567 ms 00:25:57.042 [2024-11-29 18:39:16.713604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 [2024-11-29 18:39:16.713651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.042 [2024-11-29 18:39:16.713661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:57.042 [2024-11-29 18:39:16.713676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:57.042 [2024-11-29 18:39:16.713684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 [2024-11-29 18:39:16.714283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.042 [2024-11-29 18:39:16.714319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:57.042 [2024-11-29 18:39:16.714331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:25:57.042 [2024-11-29 18:39:16.714341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 [2024-11-29 18:39:16.714524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.042 [2024-11-29 18:39:16.714537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:57.042 [2024-11-29 18:39:16.714547] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:25:57.042 [2024-11-29 18:39:16.714557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 [2024-11-29 18:39:16.722208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.042 [2024-11-29 18:39:16.722249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:57.042 [2024-11-29 18:39:16.722266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.623 ms 00:25:57.042 [2024-11-29 18:39:16.722276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 [2024-11-29 18:39:16.726081] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:57.042 [2024-11-29 18:39:16.726129] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:57.042 [2024-11-29 18:39:16.726146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.042 [2024-11-29 18:39:16.726155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:57.042 [2024-11-29 18:39:16.726166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.770 ms 00:25:57.042 [2024-11-29 18:39:16.726174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 [2024-11-29 18:39:16.741735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.042 [2024-11-29 18:39:16.741778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:57.042 [2024-11-29 18:39:16.741790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.506 ms 00:25:57.042 [2024-11-29 18:39:16.741799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 [2024-11-29 18:39:16.744650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.042 [2024-11-29 18:39:16.744691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:57.042 [2024-11-29 18:39:16.744701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.802 ms 00:25:57.042 [2024-11-29 18:39:16.744717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 [2024-11-29 18:39:16.747406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.042 [2024-11-29 18:39:16.747444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:57.042 [2024-11-29 18:39:16.747474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:25:57.042 [2024-11-29 18:39:16.747482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 [2024-11-29 18:39:16.747896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.042 [2024-11-29 18:39:16.747921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:57.042 [2024-11-29 18:39:16.747937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:25:57.042 [2024-11-29 18:39:16.747946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 [2024-11-29 18:39:16.772996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.042 [2024-11-29 18:39:16.773043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:57.042 [2024-11-29 18:39:16.773055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.026 ms 00:25:57.042 [2024-11-29 18:39:16.773069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 [2024-11-29 18:39:16.781131] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:57.042 [2024-11-29 18:39:16.784200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.042 [2024-11-29 18:39:16.784235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:57.042 [2024-11-29 18:39:16.784247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.083 ms 00:25:57.042 [2024-11-29 18:39:16.784255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 [2024-11-29 18:39:16.784336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.042 [2024-11-29 18:39:16.784347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:57.042 [2024-11-29 18:39:16.784363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:57.042 [2024-11-29 18:39:16.784371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 [2024-11-29 18:39:16.786138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.042 [2024-11-29 18:39:16.786177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:57.042 [2024-11-29 18:39:16.786187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:25:57.042 [2024-11-29 18:39:16.786195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 [2024-11-29 18:39:16.786222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.042 [2024-11-29 18:39:16.786231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:57.042 [2024-11-29 18:39:16.786246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:57.042 [2024-11-29 18:39:16.786254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 [2024-11-29 18:39:16.786293] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:57.042 [2024-11-29 18:39:16.786304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.042 [2024-11-29 18:39:16.786313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:57.042 [2024-11-29 18:39:16.786325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:57.042 [2024-11-29 18:39:16.786332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 [2024-11-29 18:39:16.791188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.042 [2024-11-29 18:39:16.791234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:57.042 [2024-11-29 18:39:16.791246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.838 ms 00:25:57.042 [2024-11-29 18:39:16.791254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 [2024-11-29 18:39:16.791336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.042 [2024-11-29 18:39:16.791351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:57.042 [2024-11-29 18:39:16.791360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:57.042 [2024-11-29 18:39:16.791371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.042 
[2024-11-29 18:39:16.792609] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.024 ms, result 0 00:25:58.431  [2024-11-29T18:39:19.282Z] Copying: 1040/1048576 [kB] (1040 kBps) [... intermediate copy progress updates condensed ...] [2024-11-29T18:39:53.500Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-11-29 18:39:53.392232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.595 [2024-11-29 18:39:53.392305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:33.595 [2024-11-29 18:39:53.392322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:33.595 [2024-11-29 18:39:53.392331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.595 [2024-11-29 18:39:53.392364] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:33.595 [2024-11-29 18:39:53.393155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.595 [2024-11-29 18:39:53.393194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:33.595 [2024-11-29 18:39:53.393310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:26:33.595 [2024-11-29 18:39:53.393319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.595 [2024-11-29 18:39:53.393567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:26:33.595 [2024-11-29 18:39:53.393591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:33.595 [2024-11-29 18:39:53.393600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:26:33.595 [2024-11-29 18:39:53.393608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.595 [2024-11-29 18:39:53.406570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.595 [2024-11-29 18:39:53.406620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:33.595 [2024-11-29 18:39:53.406639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.945 ms 00:26:33.595 [2024-11-29 18:39:53.406648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.595 [2024-11-29 18:39:53.413040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.595 [2024-11-29 18:39:53.413085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:33.595 [2024-11-29 18:39:53.413098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.350 ms 00:26:33.595 [2024-11-29 18:39:53.413106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.595 [2024-11-29 18:39:53.416180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.595 [2024-11-29 18:39:53.416233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:33.595 [2024-11-29 18:39:53.416243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.021 ms 00:26:33.595 [2024-11-29 18:39:53.416252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.595 [2024-11-29 18:39:53.422147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.595 [2024-11-29 18:39:53.422207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:33.595 [2024-11-29 18:39:53.422223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.853 ms 00:26:33.595 [2024-11-29 18:39:53.422232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.595 [2024-11-29 18:39:53.425346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.596 [2024-11-29 18:39:53.425390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:33.596 [2024-11-29 18:39:53.425412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.063 ms 00:26:33.596 [2024-11-29 18:39:53.425430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.596 [2024-11-29 18:39:53.428881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.596 [2024-11-29 18:39:53.428931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:33.596 [2024-11-29 18:39:53.428942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.433 ms 00:26:33.596 [2024-11-29 18:39:53.428950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.596 [2024-11-29 18:39:53.432076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.596 [2024-11-29 18:39:53.432126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:33.596 [2024-11-29 18:39:53.432136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.087 ms 00:26:33.596 [2024-11-29 18:39:53.432145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.596 [2024-11-29 18:39:53.434878] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.596 [2024-11-29 18:39:53.434940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:33.596 [2024-11-29 18:39:53.434951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.692 ms 00:26:33.596 [2024-11-29 18:39:53.434958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.596 [2024-11-29 18:39:53.437215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.596 [2024-11-29 18:39:53.437263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:33.596 [2024-11-29 18:39:53.437274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.189 ms 00:26:33.596 [2024-11-29 18:39:53.437281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.596 
[2024-11-29 18:39:53.437478] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:33.596 [2024-11-29 18:39:53.437497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:33.596 [2024-11-29 18:39:53.437508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:33.596 [2024-11-29 18:39:53.437518 .. 18:39:53.438283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (98 identical entries condensed) 00:26:33.597 
[2024-11-29 18:39:53.438302] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:33.597 [2024-11-29 18:39:53.438317] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dae69cb0-b2aa-49e1-af84-b0dd5275db04 00:26:33.597 [2024-11-29 18:39:53.438327] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:33.597 [2024-11-29 18:39:53.438334] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 161984 00:26:33.597 [2024-11-29 18:39:53.438347] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 160000 00:26:33.597 [2024-11-29 18:39:53.438355] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0124 00:26:33.597 [2024-11-29 18:39:53.438364] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:33.597 [2024-11-29 18:39:53.438372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:33.597 [2024-11-29 18:39:53.438380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:33.597 [2024-11-29 18:39:53.438387] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:33.597 [2024-11-29 18:39:53.438401] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:33.597 
[2024-11-29 18:39:53.438408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.597 [2024-11-29 18:39:53.438416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:33.597 [2024-11-29 18:39:53.438426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.932 ms 00:26:33.597 [2024-11-29 18:39:53.438433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.597 [2024-11-29 18:39:53.440971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.597 [2024-11-29 18:39:53.441008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:33.597 [2024-11-29 18:39:53.441019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.504 ms 00:26:33.597 [2024-11-29 18:39:53.441035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.597 [2024-11-29 18:39:53.441224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.597 [2024-11-29 18:39:53.441249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:33.597 [2024-11-29 18:39:53.441260] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:26:33.597 [2024-11-29 18:39:53.441268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.597 [2024-11-29 18:39:53.449272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.597 [2024-11-29 18:39:53.449325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:33.597 [2024-11-29 18:39:53.449338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.597 [2024-11-29 18:39:53.449347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.597 [2024-11-29 18:39:53.449410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.597 [2024-11-29 18:39:53.449427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:33.597 [2024-11-29 18:39:53.449441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.597 [2024-11-29 18:39:53.449478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.597 [2024-11-29 18:39:53.449540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.597 [2024-11-29 18:39:53.449550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:33.597 [2024-11-29 18:39:53.449558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.597 [2024-11-29 18:39:53.449570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.597 [2024-11-29 18:39:53.449590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.597 [2024-11-29 18:39:53.449599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:33.597 [2024-11-29 18:39:53.449685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.597 [2024-11-29 18:39:53.449693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.597 [2024-11-29 18:39:53.463437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.597 [2024-11-29 18:39:53.463509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:33.597 [2024-11-29 18:39:53.463521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.597 [2024-11-29 18:39:53.463530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.597 [2024-11-29 18:39:53.474766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.597 [2024-11-29 18:39:53.474837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:33.597 [2024-11-29 18:39:53.474854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.597 [2024-11-29 18:39:53.474863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.597 [2024-11-29 18:39:53.474916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.597 [2024-11-29 18:39:53.474925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:33.597 [2024-11-29 18:39:53.474934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.597 [2024-11-29 18:39:53.474943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.597 [2024-11-29 18:39:53.474979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.597 [2024-11-29 18:39:53.474988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
00:26:33.597 [2024-11-29 18:39:53.474997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.597 [2024-11-29 18:39:53.475006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.597 [2024-11-29 18:39:53.475085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.597 [2024-11-29 18:39:53.475096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:33.597 [2024-11-29 18:39:53.475105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.597 [2024-11-29 18:39:53.475113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.597 [2024-11-29 18:39:53.475151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.597 [2024-11-29 18:39:53.475161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:33.597 [2024-11-29 18:39:53.475170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.597 [2024-11-29 18:39:53.475178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.597 [2024-11-29 18:39:53.475221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.597 [2024-11-29 18:39:53.475230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:33.597 [2024-11-29 18:39:53.475239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.597 [2024-11-29 18:39:53.475247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.597 [2024-11-29 18:39:53.475293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.597 [2024-11-29 18:39:53.475303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:33.597 [2024-11-29 18:39:53.475312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.597 [2024-11-29 18:39:53.475320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.597 [2024-11-29 18:39:53.475481] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 83.192 ms, result 0 00:26:33.858 00:26:33.858 00:26:33.858 18:39:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:35.773 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:35.773 18:39:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:35.773 [2024-11-29 18:39:55.352223] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:26:35.773 [2024-11-29 18:39:55.352327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92806 ] 00:26:35.773 [2024-11-29 18:39:55.502985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.773 [2024-11-29 18:39:55.523360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.773 [2024-11-29 18:39:55.623173] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:35.773 [2024-11-29 18:39:55.623250] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:36.035 [2024-11-29 18:39:55.783693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.035 [2024-11-29 18:39:55.783755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:36.035 [2024-11-29 18:39:55.783770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:36.035 [2024-11-29 18:39:55.783778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.035 [2024-11-29 18:39:55.783838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.035 [2024-11-29 18:39:55.783850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:36.035 [2024-11-29 18:39:55.783859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:36.035 [2024-11-29 18:39:55.783867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.035 [2024-11-29 18:39:55.783895] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:36.035 [2024-11-29 18:39:55.784293] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:36.035 [2024-11-29 18:39:55.784331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.035 [2024-11-29 18:39:55.784340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:36.035 [2024-11-29 18:39:55.784353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:26:36.035 [2024-11-29 18:39:55.784361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.035 [2024-11-29 18:39:55.786084] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:36.035 [2024-11-29 18:39:55.789915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.035 [2024-11-29 18:39:55.789974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:36.035 [2024-11-29 18:39:55.789990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.833 ms 00:26:36.035 [2024-11-29 18:39:55.790002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.035 [2024-11-29 18:39:55.790098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.035 [2024-11-29 18:39:55.790110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:36.035 [2024-11-29 18:39:55.790119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:36.035 [2024-11-29 18:39:55.790127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.035 [2024-11-29 18:39:55.798107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:36.035 [2024-11-29 18:39:55.798149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:36.035 [2024-11-29 18:39:55.798167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.938 ms 00:26:36.035 [2024-11-29 18:39:55.798174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.035 [2024-11-29 18:39:55.798269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.035 [2024-11-29 18:39:55.798278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:36.035 [2024-11-29 18:39:55.798290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:26:36.035 [2024-11-29 18:39:55.798298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.035 [2024-11-29 18:39:55.798359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.035 [2024-11-29 18:39:55.798370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:36.035 [2024-11-29 18:39:55.798379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:36.035 [2024-11-29 18:39:55.798396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.035 [2024-11-29 18:39:55.798419] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:36.035 [2024-11-29 18:39:55.800403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.035 [2024-11-29 18:39:55.800440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:36.035 [2024-11-29 18:39:55.800468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.990 ms 00:26:36.036 [2024-11-29 18:39:55.800477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.036 [2024-11-29 18:39:55.800517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.036 [2024-11-29 18:39:55.800533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:36.036 [2024-11-29 18:39:55.800545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:36.036 [2024-11-29 18:39:55.800555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.036 [2024-11-29 18:39:55.800578] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:36.036 [2024-11-29 18:39:55.800600] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:36.036 [2024-11-29 18:39:55.800637] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:36.036 [2024-11-29 18:39:55.800653] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:36.036 [2024-11-29 18:39:55.800764] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:36.036 [2024-11-29 18:39:55.800776] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:36.036 [2024-11-29 18:39:55.800791] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:36.036 [2024-11-29 18:39:55.800801] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:36.036 [2024-11-29 18:39:55.800811] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:36.036 [2024-11-29 18:39:55.800819] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:36.036 [2024-11-29 18:39:55.800826] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:36.036 [2024-11-29 18:39:55.800834] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:36.036 [2024-11-29 18:39:55.800842] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:36.036 [2024-11-29 18:39:55.800850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.036 [2024-11-29 18:39:55.800860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:36.036 [2024-11-29 18:39:55.800868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:26:36.036 [2024-11-29 18:39:55.800875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.036 [2024-11-29 18:39:55.800960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.036 [2024-11-29 18:39:55.800972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:36.036 [2024-11-29 18:39:55.800980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:36.036 [2024-11-29 18:39:55.800990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.036 [2024-11-29 18:39:55.801095] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:36.036 [2024-11-29 18:39:55.801106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:36.036 [2024-11-29 18:39:55.801115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:36.036 [2024-11-29 18:39:55.801135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:36.036 [2024-11-29 18:39:55.801143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:36.036 [2024-11-29 18:39:55.801151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:36.036 [2024-11-29 18:39:55.801159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:36.036 [2024-11-29 18:39:55.801167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:36.036 [2024-11-29 18:39:55.801175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:36.036 [2024-11-29 18:39:55.801185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:36.036 [2024-11-29 18:39:55.801193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:36.036 [2024-11-29 18:39:55.801202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:36.036 [2024-11-29 18:39:55.801211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:36.036 [2024-11-29 18:39:55.801219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:36.036 [2024-11-29 18:39:55.801227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:36.036 [2024-11-29 18:39:55.801235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:36.036 [2024-11-29 18:39:55.801243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:36.036 [2024-11-29 18:39:55.801250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:36.036 [2024-11-29 18:39:55.801258] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:36.036 [2024-11-29 18:39:55.801265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:36.036 [2024-11-29 18:39:55.801273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:36.036 [2024-11-29 18:39:55.801281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:36.036 [2024-11-29 18:39:55.801288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:36.036 [2024-11-29 18:39:55.801296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:36.036 [2024-11-29 18:39:55.801303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:36.036 [2024-11-29 18:39:55.801316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:36.036 [2024-11-29 18:39:55.801323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:36.036 [2024-11-29 18:39:55.801331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:36.036 [2024-11-29 18:39:55.801338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:36.036 [2024-11-29 18:39:55.801346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:36.036 [2024-11-29 18:39:55.801353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:36.036 [2024-11-29 18:39:55.801361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:36.036 [2024-11-29 18:39:55.801369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:36.036 [2024-11-29 18:39:55.801376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:36.036 [2024-11-29 18:39:55.801384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:36.036 [2024-11-29 18:39:55.801392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:36.036 [2024-11-29 18:39:55.801399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:36.036 [2024-11-29 18:39:55.801407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:36.036 [2024-11-29 18:39:55.801415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:36.036 [2024-11-29 18:39:55.801423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:36.036 [2024-11-29 18:39:55.801431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:36.036 [2024-11-29 18:39:55.801441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:36.036 [2024-11-29 18:39:55.801448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:36.036 [2024-11-29 18:39:55.801471] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:36.036 [2024-11-29 18:39:55.801483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:36.036 [2024-11-29 18:39:55.801491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:36.036 [2024-11-29 18:39:55.801498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:36.036 [2024-11-29 18:39:55.801507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:36.036 [2024-11-29 18:39:55.801514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:36.036 [2024-11-29 18:39:55.801522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:36.036 
[2024-11-29 18:39:55.801529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:36.036 [2024-11-29 18:39:55.801535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:36.036 [2024-11-29 18:39:55.801543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:36.036 [2024-11-29 18:39:55.801551] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:36.036 [2024-11-29 18:39:55.801560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:36.036 [2024-11-29 18:39:55.801568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:36.036 [2024-11-29 18:39:55.801576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:36.036 [2024-11-29 18:39:55.801586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:36.036 [2024-11-29 18:39:55.801594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:36.036 [2024-11-29 18:39:55.801602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:36.036 [2024-11-29 18:39:55.801609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:36.036 [2024-11-29 18:39:55.801617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:36.036 [2024-11-29 18:39:55.801624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:36.036 [2024-11-29 18:39:55.801631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:36.036 [2024-11-29 18:39:55.801638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:36.036 [2024-11-29 18:39:55.801645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:36.036 [2024-11-29 18:39:55.801651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:36.036 [2024-11-29 18:39:55.801658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:36.036 [2024-11-29 18:39:55.801665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:36.036 [2024-11-29 18:39:55.801679] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:36.036 [2024-11-29 18:39:55.801688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:36.037 [2024-11-29 18:39:55.801696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:36.037 [2024-11-29 18:39:55.801703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:36.037 [2024-11-29 18:39:55.801712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:36.037 [2024-11-29 18:39:55.801720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:36.037 [2024-11-29 18:39:55.801728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.801742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:36.037 [2024-11-29 18:39:55.801750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:26:36.037 [2024-11-29 18:39:55.801761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.816688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.816733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:36.037 [2024-11-29 18:39:55.816751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.883 ms 00:26:36.037 [2024-11-29 18:39:55.816760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.816847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.816858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:36.037 [2024-11-29 18:39:55.816866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:26:36.037 [2024-11-29 18:39:55.816879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.837726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.837782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:36.037 [2024-11-29 18:39:55.837796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.790 ms 00:26:36.037 [2024-11-29 18:39:55.837806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.837854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.837877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:36.037 [2024-11-29 18:39:55.837890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:36.037 [2024-11-29 18:39:55.837898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.838540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.838584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:36.037 [2024-11-29 18:39:55.838595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:26:36.037 [2024-11-29 18:39:55.838605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.838776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.838787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:36.037 [2024-11-29 18:39:55.838797] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:26:36.037 [2024-11-29 18:39:55.838807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.847139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.847191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:36.037 [2024-11-29 18:39:55.847204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.308 ms 00:26:36.037 [2024-11-29 18:39:55.847213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.851078] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:36.037 [2024-11-29 18:39:55.851130] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:36.037 [2024-11-29 18:39:55.851146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.851154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:36.037 [2024-11-29 18:39:55.851163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.830 ms 00:26:36.037 [2024-11-29 18:39:55.851170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.867150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.867197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:36.037 [2024-11-29 18:39:55.867209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.928 ms 00:26:36.037 [2024-11-29 18:39:55.867218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.870153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.870196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:36.037 [2024-11-29 18:39:55.870206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.873 ms 00:26:36.037 [2024-11-29 18:39:55.870213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.872856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.872902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:36.037 [2024-11-29 18:39:55.872922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.598 ms 00:26:36.037 [2024-11-29 18:39:55.872929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.873270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.873296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:36.037 [2024-11-29 18:39:55.873306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:26:36.037 [2024-11-29 18:39:55.873313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.897992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.898073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:36.037 [2024-11-29 18:39:55.898087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.653 ms 00:26:36.037 [2024-11-29 18:39:55.898096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.907137] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:36.037 [2024-11-29 18:39:55.910202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.910248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:36.037 [2024-11-29 18:39:55.910260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.057 ms 00:26:36.037 [2024-11-29 18:39:55.910268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.910344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.910354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:36.037 [2024-11-29 18:39:55.910364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:36.037 [2024-11-29 18:39:55.910371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.911180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.911226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:36.037 [2024-11-29 18:39:55.911236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:26:36.037 [2024-11-29 18:39:55.911244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.911270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.911279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:36.037 [2024-11-29 18:39:55.911294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:36.037 [2024-11-29 18:39:55.911302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.911341] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:36.037 [2024-11-29 18:39:55.911351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.911361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:36.037 [2024-11-29 18:39:55.911372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:36.037 [2024-11-29 18:39:55.911379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.916702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.916752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:36.037 [2024-11-29 18:39:55.916764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.304 ms 00:26:36.037 [2024-11-29 18:39:55.916773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 [2024-11-29 18:39:55.916861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.037 [2024-11-29 18:39:55.916871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:36.037 [2024-11-29 18:39:55.916886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:36.037 [2024-11-29 18:39:55.916899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.037 
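
Two quick arithmetic cross-checks on figures this run has logged so far, a sketch with the numbers copied from the dumps above (the 20971520 / 4 pair from ftl_layout_setup, the write counters from the first shutdown's ftl_dev_dump_stats):

    # L2P table size: entries x address size, from "L2P entries: 20971520"
    # and "L2P address size: 4" in the layout dump above.
    entries, addr_size = 20971520, 4
    size_mib = entries * addr_size / (1024 * 1024)
    assert size_mib == 80.0          # matches "Region l2p ... blocks: 80.00 MiB"

    # Write amplification: total writes over user writes, from the first
    # shutdown's stats dump ("total writes: 161984", "user writes: 160000").
    total_writes, user_writes = 161984, 160000
    waf = total_writes / user_writes
    print(f"L2P table: {size_mib:.2f} MiB, WAF: {waf:.4f}")  # -> 80.00 MiB, 1.0124
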
[2024-11-29 18:39:55.918105] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.893 ms, result 0 00:26:37.428  [2024-11-29T18:39:58.277Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-29T18:39:59.222Z] Copying: 42/1024 [MB] (19 MBps) [2024-11-29T18:40:00.166Z] Copying: 59/1024 [MB] (16 MBps) [2024-11-29T18:40:01.110Z] Copying: 81/1024 [MB] (22 MBps) [2024-11-29T18:40:02.493Z] Copying: 106/1024 [MB] (24 MBps) [2024-11-29T18:40:03.435Z] Copying: 125/1024 [MB] (18 MBps) [2024-11-29T18:40:04.377Z] Copying: 146/1024 [MB] (21 MBps) [2024-11-29T18:40:05.368Z] Copying: 164/1024 [MB] (17 MBps) [2024-11-29T18:40:06.338Z] Copying: 184/1024 [MB] (20 MBps) [2024-11-29T18:40:07.281Z] Copying: 196/1024 [MB] (11 MBps) [2024-11-29T18:40:08.223Z] Copying: 216/1024 [MB] (19 MBps) [2024-11-29T18:40:09.167Z] Copying: 236/1024 [MB] (20 MBps) [2024-11-29T18:40:10.111Z] Copying: 259/1024 [MB] (23 MBps) [2024-11-29T18:40:11.498Z] Copying: 282/1024 [MB] (22 MBps) [2024-11-29T18:40:12.442Z] Copying: 301/1024 [MB] (18 MBps) [2024-11-29T18:40:13.386Z] Copying: 314/1024 [MB] (13 MBps) [2024-11-29T18:40:14.331Z] Copying: 325/1024 [MB] (10 MBps) [2024-11-29T18:40:15.274Z] Copying: 336/1024 [MB] (10 MBps) [2024-11-29T18:40:16.221Z] Copying: 346/1024 [MB] (10 MBps) [2024-11-29T18:40:17.164Z] Copying: 364/1024 [MB] (17 MBps) [2024-11-29T18:40:18.109Z] Copying: 379/1024 [MB] (15 MBps) [2024-11-29T18:40:19.496Z] Copying: 396/1024 [MB] (17 MBps) [2024-11-29T18:40:20.440Z] Copying: 412/1024 [MB] (15 MBps) [2024-11-29T18:40:21.382Z] Copying: 429/1024 [MB] (17 MBps) [2024-11-29T18:40:22.323Z] Copying: 442/1024 [MB] (13 MBps) [2024-11-29T18:40:23.269Z] Copying: 460/1024 [MB] (18 MBps) [2024-11-29T18:40:24.211Z] Copying: 475/1024 [MB] (14 MBps) [2024-11-29T18:40:25.155Z] Copying: 492/1024 [MB] (17 MBps) [2024-11-29T18:40:26.098Z] Copying: 512/1024 [MB] (20 MBps) [2024-11-29T18:40:27.486Z] Copying: 527/1024 [MB] (14 MBps) [2024-11-29T18:40:28.431Z] Copying: 546/1024 [MB] (19 MBps) [2024-11-29T18:40:29.377Z] Copying: 568/1024 [MB] (21 MBps) [2024-11-29T18:40:30.322Z] Copying: 582/1024 [MB] (14 MBps) [2024-11-29T18:40:31.266Z] Copying: 597/1024 [MB] (14 MBps) [2024-11-29T18:40:32.210Z] Copying: 613/1024 [MB] (16 MBps) [2024-11-29T18:40:33.153Z] Copying: 632/1024 [MB] (18 MBps) [2024-11-29T18:40:34.094Z] Copying: 648/1024 [MB] (16 MBps) [2024-11-29T18:40:35.095Z] Copying: 667/1024 [MB] (18 MBps) [2024-11-29T18:40:36.480Z] Copying: 683/1024 [MB] (15 MBps) [2024-11-29T18:40:37.426Z] Copying: 699/1024 [MB] (15 MBps) [2024-11-29T18:40:38.372Z] Copying: 712/1024 [MB] (12 MBps) [2024-11-29T18:40:39.318Z] Copying: 727/1024 [MB] (15 MBps) [2024-11-29T18:40:40.267Z] Copying: 746/1024 [MB] (18 MBps) [2024-11-29T18:40:41.212Z] Copying: 761/1024 [MB] (14 MBps) [2024-11-29T18:40:42.156Z] Copying: 779/1024 [MB] (17 MBps) [2024-11-29T18:40:43.098Z] Copying: 807/1024 [MB] (28 MBps) [2024-11-29T18:40:44.488Z] Copying: 826/1024 [MB] (18 MBps) [2024-11-29T18:40:45.432Z] Copying: 844/1024 [MB] (18 MBps) [2024-11-29T18:40:46.374Z] Copying: 865/1024 [MB] (20 MBps) [2024-11-29T18:40:47.314Z] Copying: 881/1024 [MB] (15 MBps) [2024-11-29T18:40:48.258Z] Copying: 896/1024 [MB] (14 MBps) [2024-11-29T18:40:49.201Z] Copying: 911/1024 [MB] (14 MBps) [2024-11-29T18:40:50.142Z] Copying: 925/1024 [MB] (14 MBps) [2024-11-29T18:40:51.530Z] Copying: 937/1024 [MB] (11 MBps) [2024-11-29T18:40:52.103Z] Copying: 947/1024 [MB] (10 MBps) [2024-11-29T18:40:53.491Z] Copying: 959/1024 [MB] (11 MBps) 
[2024-11-29T18:40:54.433Z] Copying: 970/1024 [MB] (11 MBps) [2024-11-29T18:40:55.376Z] Copying: 981/1024 [MB] (11 MBps) [2024-11-29T18:40:56.319Z] Copying: 997/1024 [MB] (16 MBps) [2024-11-29T18:40:56.580Z] Copying: 1015/1024 [MB] (18 MBps) [2024-11-29T18:40:56.841Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-29 18:40:56.819834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.936 [2024-11-29 18:40:56.819923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:36.936 [2024-11-29 18:40:56.819947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:36.936 [2024-11-29 18:40:56.819957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.936 [2024-11-29 18:40:56.819983] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:36.936 [2024-11-29 18:40:56.820786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.936 [2024-11-29 18:40:56.820826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:36.936 [2024-11-29 18:40:56.820839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:27:36.936 [2024-11-29 18:40:56.820849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.936 [2024-11-29 18:40:56.821093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.936 [2024-11-29 18:40:56.821104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:36.936 [2024-11-29 18:40:56.821115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:27:36.936 [2024-11-29 18:40:56.821127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.936 [2024-11-29 18:40:56.825301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.936 [2024-11-29 18:40:56.825325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:36.936 [2024-11-29 18:40:56.825336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.154 ms 00:27:36.936 [2024-11-29 18:40:56.825344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.936 [2024-11-29 18:40:56.831627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.936 [2024-11-29 18:40:56.831665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:36.936 [2024-11-29 18:40:56.831676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.264 ms 00:27:36.936 [2024-11-29 18:40:56.831692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.936 [2024-11-29 18:40:56.835070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.937 [2024-11-29 18:40:56.835115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:36.937 [2024-11-29 18:40:56.835126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.307 ms 00:27:36.937 [2024-11-29 18:40:56.835135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.200 [2024-11-29 18:40:56.840040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.200 [2024-11-29 18:40:56.840086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:37.200 [2024-11-29 18:40:56.840097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.859 ms 00:27:37.200 [2024-11-29 18:40:56.840106] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.200 [2024-11-29 18:40:56.843670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.200 [2024-11-29 18:40:56.843721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:37.200 [2024-11-29 18:40:56.843740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.517 ms 00:27:37.200 [2024-11-29 18:40:56.843756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.200 [2024-11-29 18:40:56.846549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.200 [2024-11-29 18:40:56.846592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:37.200 [2024-11-29 18:40:56.846602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.775 ms 00:27:37.200 [2024-11-29 18:40:56.846610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.200 [2024-11-29 18:40:56.848690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.200 [2024-11-29 18:40:56.848728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:37.200 [2024-11-29 18:40:56.848739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.039 ms 00:27:37.200 [2024-11-29 18:40:56.848746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.200 [2024-11-29 18:40:56.850383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.200 [2024-11-29 18:40:56.850424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:37.200 [2024-11-29 18:40:56.850434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.597 ms 00:27:37.200 [2024-11-29 18:40:56.850442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.200 [2024-11-29 18:40:56.851889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.200 [2024-11-29 18:40:56.851931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:37.200 [2024-11-29 18:40:56.851941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms 00:27:37.200 [2024-11-29 18:40:56.851948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.200 [2024-11-29 18:40:56.851987] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:37.200 [2024-11-29 18:40:56.852003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:37.200 [2024-11-29 18:40:56.852024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:37.200 [2024-11-29 18:40:56.852033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 
wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852484] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852690] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:37.201 [2024-11-29 18:40:56.852761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:37.202 [2024-11-29 18:40:56.852769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:37.202 [2024-11-29 18:40:56.852776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:37.202 [2024-11-29 18:40:56.852784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:37.202 [2024-11-29 18:40:56.852791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:37.202 [2024-11-29 18:40:56.852799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:37.202 [2024-11-29 18:40:56.852806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:37.202 [2024-11-29 18:40:56.852814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:37.202 [2024-11-29 18:40:56.852821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:37.202 [2024-11-29 18:40:56.852838] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:37.202 [2024-11-29 18:40:56.852848] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dae69cb0-b2aa-49e1-af84-b0dd5275db04 00:27:37.202 [2024-11-29 18:40:56.852856] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:37.202 [2024-11-29 18:40:56.852864] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:37.202 [2024-11-29 18:40:56.852871] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:37.202 [2024-11-29 18:40:56.852880] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:37.202 [2024-11-29 18:40:56.852892] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:37.202 [2024-11-29 18:40:56.852900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:37.202 [2024-11-29 18:40:56.852918] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] high: 0 00:27:37.202 [2024-11-29 18:40:56.852933] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:37.202 [2024-11-29 18:40:56.852940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:37.202 [2024-11-29 18:40:56.852948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.202 [2024-11-29 18:40:56.852956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:37.202 [2024-11-29 18:40:56.852964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:27:37.202 [2024-11-29 18:40:56.852972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.202 [2024-11-29 18:40:56.855231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.202 [2024-11-29 18:40:56.855268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:37.202 [2024-11-29 18:40:56.855278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.228 ms 00:27:37.202 [2024-11-29 18:40:56.855286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.202 [2024-11-29 18:40:56.855414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.202 [2024-11-29 18:40:56.855423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:37.202 [2024-11-29 18:40:56.855432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:27:37.202 [2024-11-29 18:40:56.855439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.202 [2024-11-29 18:40:56.862824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.202 [2024-11-29 18:40:56.862864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:37.202 [2024-11-29 18:40:56.862878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.202 [2024-11-29 18:40:56.862886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.202 [2024-11-29 18:40:56.862945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.202 [2024-11-29 18:40:56.862953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:37.202 [2024-11-29 18:40:56.862961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.202 [2024-11-29 18:40:56.862969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.202 [2024-11-29 18:40:56.863032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.202 [2024-11-29 18:40:56.863043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:37.202 [2024-11-29 18:40:56.863057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.202 [2024-11-29 18:40:56.863068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.202 [2024-11-29 18:40:56.863085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.202 [2024-11-29 18:40:56.863093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:37.202 [2024-11-29 18:40:56.863102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.202 [2024-11-29 18:40:56.863109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.202 [2024-11-29 18:40:56.876738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.202 [2024-11-29 18:40:56.876783] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:37.202 [2024-11-29 18:40:56.876800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.202 [2024-11-29 18:40:56.876816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.202 [2024-11-29 18:40:56.887761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.202 [2024-11-29 18:40:56.887808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:37.202 [2024-11-29 18:40:56.887820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.202 [2024-11-29 18:40:56.887829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.202 [2024-11-29 18:40:56.887878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.202 [2024-11-29 18:40:56.887888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:37.202 [2024-11-29 18:40:56.887897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.202 [2024-11-29 18:40:56.887906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.202 [2024-11-29 18:40:56.887949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.202 [2024-11-29 18:40:56.887959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:37.202 [2024-11-29 18:40:56.887968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.202 [2024-11-29 18:40:56.887978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.202 [2024-11-29 18:40:56.888056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.202 [2024-11-29 18:40:56.888067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:37.202 [2024-11-29 18:40:56.888080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.202 [2024-11-29 18:40:56.888088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.202 [2024-11-29 18:40:56.888123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.202 [2024-11-29 18:40:56.888136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:37.202 [2024-11-29 18:40:56.888145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.202 [2024-11-29 18:40:56.888153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.202 [2024-11-29 18:40:56.888200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.202 [2024-11-29 18:40:56.888210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:37.202 [2024-11-29 18:40:56.888224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.202 [2024-11-29 18:40:56.888232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.202 [2024-11-29 18:40:56.888281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.202 [2024-11-29 18:40:56.888292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:37.202 [2024-11-29 18:40:56.888301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.202 [2024-11-29 18:40:56.888309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.202 [2024-11-29 18:40:56.888448] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: 
[FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.575 ms, result 0 00:27:37.463 00:27:37.464 00:27:37.464 18:40:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:39.376 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:39.376 18:40:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:39.376 18:40:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:39.376 18:40:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:39.376 18:40:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:39.637 18:40:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:39.637 18:40:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:39.637 18:40:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:39.637 Process with pid 91046 is not found 00:27:39.637 18:40:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 91046 00:27:39.637 18:40:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91046 ']' 00:27:39.637 18:40:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 91046 00:27:39.637 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (91046) - No such process 00:27:39.637 18:40:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 91046 is not found' 00:27:39.637 18:40:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:40.212 18:40:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:40.212 Remove shared memory files 00:27:40.212 18:40:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:40.212 18:40:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:40.212 18:40:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:40.212 18:40:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:40.212 18:40:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:40.212 18:40:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:40.212 ************************************ 00:27:40.212 END TEST ftl_dirty_shutdown 00:27:40.212 ************************************ 00:27:40.212 00:27:40.212 real 3m48.980s 00:27:40.212 user 4m12.387s 00:27:40.212 sys 0m25.725s 00:27:40.212 18:40:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:40.212 18:40:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:40.212 18:40:59 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:40.212 18:40:59 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:40.212 18:40:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:40.212 18:40:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:40.212 ************************************ 00:27:40.212 START TEST ftl_upgrade_shutdown 00:27:40.212 ************************************ 00:27:40.212 18:40:59 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:40.212 * Looking for test storage... 00:27:40.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:40.212 18:40:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:40.212 18:40:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:27:40.212 18:40:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:40.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.212 --rc genhtml_branch_coverage=1 00:27:40.212 --rc genhtml_function_coverage=1 00:27:40.212 --rc genhtml_legend=1 00:27:40.212 --rc geninfo_all_blocks=1 00:27:40.212 --rc geninfo_unexecuted_blocks=1 00:27:40.212 00:27:40.212 ' 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:40.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.212 --rc genhtml_branch_coverage=1 00:27:40.212 --rc genhtml_function_coverage=1 00:27:40.212 --rc genhtml_legend=1 00:27:40.212 --rc geninfo_all_blocks=1 00:27:40.212 --rc geninfo_unexecuted_blocks=1 00:27:40.212 00:27:40.212 ' 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:40.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.212 --rc genhtml_branch_coverage=1 00:27:40.212 --rc genhtml_function_coverage=1 00:27:40.212 --rc genhtml_legend=1 00:27:40.212 --rc geninfo_all_blocks=1 00:27:40.212 --rc geninfo_unexecuted_blocks=1 00:27:40.212 00:27:40.212 ' 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:40.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.212 --rc genhtml_branch_coverage=1 00:27:40.212 --rc genhtml_function_coverage=1 00:27:40.212 --rc genhtml_legend=1 00:27:40.212 --rc geninfo_all_blocks=1 00:27:40.212 --rc geninfo_unexecuted_blocks=1 00:27:40.212 00:27:40.212 ' 00:27:40.212 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:40.213 18:41:00 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93532 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93532 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93532 ']' 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:40.213 18:41:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:40.475 [2024-11-29 18:41:00.130898] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:27:40.475 [2024-11-29 18:41:00.131186] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93532 ] 00:27:40.475 [2024-11-29 18:41:00.289266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.475 [2024-11-29 18:41:00.311102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:41.420 18:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:41.420 18:41:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:41.420 18:41:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:41.420 18:41:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:41.420 18:41:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:27:41.420 18:41:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:41.420 18:41:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:41.420 18:41:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:27:41.420 18:41:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:41.681 18:41:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:41.681 { 00:27:41.681 "name": "basen1", 00:27:41.681 "aliases": [ 00:27:41.681 "5784a73c-a2d6-408e-8443-a56de2c6915f" 00:27:41.681 ], 00:27:41.681 "product_name": "NVMe disk", 00:27:41.681 "block_size": 4096, 00:27:41.681 "num_blocks": 1310720, 00:27:41.681 "uuid": "5784a73c-a2d6-408e-8443-a56de2c6915f", 00:27:41.681 "numa_id": -1, 00:27:41.681 "assigned_rate_limits": { 00:27:41.681 "rw_ios_per_sec": 0, 00:27:41.681 "rw_mbytes_per_sec": 0, 00:27:41.681 "r_mbytes_per_sec": 0, 00:27:41.681 "w_mbytes_per_sec": 0 00:27:41.681 }, 00:27:41.681 "claimed": true, 00:27:41.681 "claim_type": "read_many_write_one", 00:27:41.681 "zoned": false, 00:27:41.681 "supported_io_types": { 00:27:41.681 "read": true, 00:27:41.681 "write": true, 00:27:41.681 "unmap": true, 00:27:41.681 "flush": true, 00:27:41.681 "reset": true, 00:27:41.681 "nvme_admin": true, 00:27:41.681 "nvme_io": true, 00:27:41.681 "nvme_io_md": false, 00:27:41.681 "write_zeroes": true, 00:27:41.681 "zcopy": false, 00:27:41.681 "get_zone_info": false, 00:27:41.681 "zone_management": false, 00:27:41.681 "zone_append": false, 00:27:41.681 "compare": true, 00:27:41.681 "compare_and_write": false, 00:27:41.681 "abort": true, 00:27:41.681 "seek_hole": false, 00:27:41.681 "seek_data": false, 00:27:41.681 "copy": true, 00:27:41.681 "nvme_iov_md": false 00:27:41.681 }, 00:27:41.681 "driver_specific": { 00:27:41.681 "nvme": [ 00:27:41.681 { 00:27:41.681 "pci_address": "0000:00:11.0", 00:27:41.681 "trid": { 00:27:41.681 "trtype": "PCIe", 00:27:41.682 "traddr": "0000:00:11.0" 00:27:41.682 }, 00:27:41.682 "ctrlr_data": { 00:27:41.682 "cntlid": 0, 00:27:41.682 "vendor_id": "0x1b36", 00:27:41.682 "model_number": "QEMU NVMe Ctrl", 00:27:41.682 "serial_number": "12341", 00:27:41.682 "firmware_revision": "8.0.0", 00:27:41.682 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:41.682 "oacs": { 00:27:41.682 "security": 0, 00:27:41.682 "format": 1, 00:27:41.682 "firmware": 0, 00:27:41.682 "ns_manage": 1 00:27:41.682 }, 00:27:41.682 "multi_ctrlr": false, 00:27:41.682 "ana_reporting": false 00:27:41.682 }, 00:27:41.682 "vs": { 00:27:41.682 "nvme_version": "1.4" 00:27:41.682 }, 00:27:41.682 "ns_data": { 00:27:41.682 "id": 1, 00:27:41.682 "can_share": false 00:27:41.682 } 00:27:41.682 } 00:27:41.682 ], 00:27:41.682 "mp_policy": "active_passive" 00:27:41.682 } 00:27:41.682 } 00:27:41.682 ]' 00:27:41.682 18:41:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:41.682 18:41:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:41.682 18:41:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:41.682 18:41:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:41.682 18:41:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:41.682 18:41:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:41.682 18:41:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:41.682 18:41:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:41.682 18:41:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:41.682 18:41:01 ftl.ftl_upgrade_shutdown -- 
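The size probe here (the bdev_get_bdevs call above plus the jq filters on its output below) is where base_size comes from: block_size 4096 × num_blocks 1310720 works out to 5 GiB, which get_bdev_size reports as 5120 (MiB). The rpc.py sequence this test drives through ftl/common.sh can be sketched standalone as below; the PCI addresses, names, and sizes are the ones visible in this run, while the lvstore and lvol UUIDs are generated at run time and stand in here only as placeholders:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Base namespace (basen1) sits on 0000:00:11.0; a 20480 MiB lvol is carved
# out of it thin-provisioned (-t), so it may exceed the 5 GiB namespace.
# 0000:00:10.0 supplies the 5120 MiB write-buffer cache split (cachen1p0).
$RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
$RPC bdev_get_bdevs -b basen1 | jq '.[] .block_size, .[] .num_blocks'
$RPC bdev_lvol_create_lvstore basen1 lvs                # prints the lvstore UUID
$RPC bdev_lvol_create basen1p0 20480 -t -u <lvstore-uuid>
$RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
$RPC bdev_split_create cachen1 -s 5120 1                # yields cachen1p0
$RPC -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2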
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:41.682 18:41:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:41.943 18:41:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=a62515a8-9ccc-4a82-873e-e92f8299ddb3 00:27:41.943 18:41:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:41.943 18:41:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a62515a8-9ccc-4a82-873e-e92f8299ddb3 00:27:42.204 18:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:42.465 18:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=561cb08b-1933-4e79-8dc1-7fc231005a2b 00:27:42.465 18:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 561cb08b-1933-4e79-8dc1-7fc231005a2b 00:27:42.745 18:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=e7349493-87ae-4543-96ee-e01717c187ad 00:27:42.745 18:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z e7349493-87ae-4543-96ee-e01717c187ad ]] 00:27:42.745 18:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 e7349493-87ae-4543-96ee-e01717c187ad 5120 00:27:42.745 18:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:42.745 18:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:42.745 18:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=e7349493-87ae-4543-96ee-e01717c187ad 00:27:42.745 18:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:42.745 18:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size e7349493-87ae-4543-96ee-e01717c187ad 00:27:42.745 18:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=e7349493-87ae-4543-96ee-e01717c187ad 00:27:42.745 18:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:42.745 18:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:42.745 18:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:42.745 18:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e7349493-87ae-4543-96ee-e01717c187ad 00:27:43.044 18:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:43.044 { 00:27:43.044 "name": "e7349493-87ae-4543-96ee-e01717c187ad", 00:27:43.044 "aliases": [ 00:27:43.044 "lvs/basen1p0" 00:27:43.044 ], 00:27:43.044 "product_name": "Logical Volume", 00:27:43.044 "block_size": 4096, 00:27:43.044 "num_blocks": 5242880, 00:27:43.044 "uuid": "e7349493-87ae-4543-96ee-e01717c187ad", 00:27:43.044 "assigned_rate_limits": { 00:27:43.044 "rw_ios_per_sec": 0, 00:27:43.044 "rw_mbytes_per_sec": 0, 00:27:43.044 "r_mbytes_per_sec": 0, 00:27:43.044 "w_mbytes_per_sec": 0 00:27:43.044 }, 00:27:43.044 "claimed": false, 00:27:43.044 "zoned": false, 00:27:43.044 "supported_io_types": { 00:27:43.044 "read": true, 00:27:43.044 "write": true, 00:27:43.044 "unmap": true, 00:27:43.044 "flush": false, 00:27:43.044 "reset": true, 00:27:43.044 "nvme_admin": false, 00:27:43.044 "nvme_io": false, 00:27:43.044 "nvme_io_md": false, 00:27:43.044 "write_zeroes": 
true, 00:27:43.044 "zcopy": false, 00:27:43.044 "get_zone_info": false, 00:27:43.044 "zone_management": false, 00:27:43.044 "zone_append": false, 00:27:43.044 "compare": false, 00:27:43.044 "compare_and_write": false, 00:27:43.044 "abort": false, 00:27:43.044 "seek_hole": true, 00:27:43.044 "seek_data": true, 00:27:43.044 "copy": false, 00:27:43.044 "nvme_iov_md": false 00:27:43.044 }, 00:27:43.044 "driver_specific": { 00:27:43.044 "lvol": { 00:27:43.044 "lvol_store_uuid": "561cb08b-1933-4e79-8dc1-7fc231005a2b", 00:27:43.044 "base_bdev": "basen1", 00:27:43.044 "thin_provision": true, 00:27:43.044 "num_allocated_clusters": 0, 00:27:43.044 "snapshot": false, 00:27:43.044 "clone": false, 00:27:43.044 "esnap_clone": false 00:27:43.044 } 00:27:43.044 } 00:27:43.044 } 00:27:43.044 ]' 00:27:43.044 18:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:43.044 18:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:43.044 18:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:43.044 18:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:27:43.044 18:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:27:43.044 18:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:27:43.044 18:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:43.044 18:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:43.044 18:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:43.317 18:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:43.317 18:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:43.317 18:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:43.579 18:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:43.579 18:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:43.579 18:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d e7349493-87ae-4543-96ee-e01717c187ad -c cachen1p0 --l2p_dram_limit 2 00:27:43.579 [2024-11-29 18:41:03.419972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.579 [2024-11-29 18:41:03.420010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:43.579 [2024-11-29 18:41:03.420021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:43.579 [2024-11-29 18:41:03.420028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.579 [2024-11-29 18:41:03.420068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.579 [2024-11-29 18:41:03.420079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:43.579 [2024-11-29 18:41:03.420086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:43.579 [2024-11-29 18:41:03.420095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.579 [2024-11-29 18:41:03.420112] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:43.579 [2024-11-29 
18:41:03.420309] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:43.579 [2024-11-29 18:41:03.420324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.579 [2024-11-29 18:41:03.420335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:43.579 [2024-11-29 18:41:03.420341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.215 ms 00:27:43.579 [2024-11-29 18:41:03.420349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.579 [2024-11-29 18:41:03.420373] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID eb885eaf-d12b-477b-802c-fba77e1cc9d7 00:27:43.579 [2024-11-29 18:41:03.421324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.579 [2024-11-29 18:41:03.421352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:43.579 [2024-11-29 18:41:03.421361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:43.579 [2024-11-29 18:41:03.421367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.579 [2024-11-29 18:41:03.426035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.579 [2024-11-29 18:41:03.426061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:43.579 [2024-11-29 18:41:03.426070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.613 ms 00:27:43.579 [2024-11-29 18:41:03.426076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.579 [2024-11-29 18:41:03.426111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.579 [2024-11-29 18:41:03.426117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:43.579 [2024-11-29 18:41:03.426125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:43.579 [2024-11-29 18:41:03.426131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.579 [2024-11-29 18:41:03.426170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.579 [2024-11-29 18:41:03.426177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:43.579 [2024-11-29 18:41:03.426190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:43.579 [2024-11-29 18:41:03.426196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.579 [2024-11-29 18:41:03.426214] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:43.579 [2024-11-29 18:41:03.427441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.579 [2024-11-29 18:41:03.427479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:43.579 [2024-11-29 18:41:03.427486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.232 ms 00:27:43.579 [2024-11-29 18:41:03.427493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.579 [2024-11-29 18:41:03.427513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.579 [2024-11-29 18:41:03.427521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:43.579 [2024-11-29 18:41:03.427528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:43.579 [2024-11-29 18:41:03.427536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:43.579 [2024-11-29 18:41:03.427549] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:43.579 [2024-11-29 18:41:03.427654] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:43.579 [2024-11-29 18:41:03.427663] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:43.579 [2024-11-29 18:41:03.427673] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:43.579 [2024-11-29 18:41:03.427686] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:43.579 [2024-11-29 18:41:03.427699] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:43.579 [2024-11-29 18:41:03.427707] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:43.579 [2024-11-29 18:41:03.427716] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:43.579 [2024-11-29 18:41:03.427722] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:43.579 [2024-11-29 18:41:03.427729] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:43.579 [2024-11-29 18:41:03.427734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.579 [2024-11-29 18:41:03.427741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:43.579 [2024-11-29 18:41:03.427748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:27:43.579 [2024-11-29 18:41:03.427755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.579 [2024-11-29 18:41:03.427818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.579 [2024-11-29 18:41:03.427830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:43.579 [2024-11-29 18:41:03.427836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:27:43.579 [2024-11-29 18:41:03.427844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.579 [2024-11-29 18:41:03.427916] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:43.579 [2024-11-29 18:41:03.427925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:43.579 [2024-11-29 18:41:03.427931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:43.579 [2024-11-29 18:41:03.427939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:43.579 [2024-11-29 18:41:03.427946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:43.579 [2024-11-29 18:41:03.427953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:43.579 [2024-11-29 18:41:03.427958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:43.579 [2024-11-29 18:41:03.427964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:43.579 [2024-11-29 18:41:03.427970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:43.579 [2024-11-29 18:41:03.427976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:43.579 [2024-11-29 18:41:03.427982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:43.579 [2024-11-29 18:41:03.427990] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:43.579 [2024-11-29 18:41:03.427994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:43.579 [2024-11-29 18:41:03.428003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:43.579 [2024-11-29 18:41:03.428008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:43.580 [2024-11-29 18:41:03.428014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:43.580 [2024-11-29 18:41:03.428020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:43.580 [2024-11-29 18:41:03.428026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:43.580 [2024-11-29 18:41:03.428031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:43.580 [2024-11-29 18:41:03.428038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:43.580 [2024-11-29 18:41:03.428046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:43.580 [2024-11-29 18:41:03.428053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:43.580 [2024-11-29 18:41:03.428058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:43.580 [2024-11-29 18:41:03.428064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:43.580 [2024-11-29 18:41:03.428069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:43.580 [2024-11-29 18:41:03.428075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:43.580 [2024-11-29 18:41:03.428080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:43.580 [2024-11-29 18:41:03.428086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:43.580 [2024-11-29 18:41:03.428093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:43.580 [2024-11-29 18:41:03.428101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:43.580 [2024-11-29 18:41:03.428107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:43.580 [2024-11-29 18:41:03.428114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:43.580 [2024-11-29 18:41:03.428119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:43.580 [2024-11-29 18:41:03.428127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:43.580 [2024-11-29 18:41:03.428132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:43.580 [2024-11-29 18:41:03.428140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:43.580 [2024-11-29 18:41:03.428145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:43.580 [2024-11-29 18:41:03.428153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:43.580 [2024-11-29 18:41:03.428159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:43.580 [2024-11-29 18:41:03.428167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:43.580 [2024-11-29 18:41:03.428172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:43.580 [2024-11-29 18:41:03.428180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:43.580 [2024-11-29 18:41:03.428185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:43.580 [2024-11-29 18:41:03.428192] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:43.580 [2024-11-29 18:41:03.428199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:43.580 [2024-11-29 18:41:03.428208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:43.580 [2024-11-29 18:41:03.428214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:43.580 [2024-11-29 18:41:03.428222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:43.580 [2024-11-29 18:41:03.428228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:43.580 [2024-11-29 18:41:03.428236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:43.580 [2024-11-29 18:41:03.428241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:43.580 [2024-11-29 18:41:03.428248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:43.580 [2024-11-29 18:41:03.428254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:43.580 [2024-11-29 18:41:03.428265] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:43.580 [2024-11-29 18:41:03.428273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:43.580 [2024-11-29 18:41:03.428282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:43.580 [2024-11-29 18:41:03.428288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:43.580 [2024-11-29 18:41:03.428296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:43.580 [2024-11-29 18:41:03.428303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:43.580 [2024-11-29 18:41:03.428310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:43.580 [2024-11-29 18:41:03.428316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:43.580 [2024-11-29 18:41:03.428326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:43.580 [2024-11-29 18:41:03.428333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:43.580 [2024-11-29 18:41:03.428340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:43.580 [2024-11-29 18:41:03.428347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:43.580 [2024-11-29 18:41:03.428354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:43.580 [2024-11-29 18:41:03.428360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:43.580 [2024-11-29 18:41:03.428375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:43.580 [2024-11-29 18:41:03.428382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:43.580 [2024-11-29 18:41:03.428390] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:43.580 [2024-11-29 18:41:03.428397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:43.580 [2024-11-29 18:41:03.428405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:43.580 [2024-11-29 18:41:03.428411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:43.580 [2024-11-29 18:41:03.428418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:43.580 [2024-11-29 18:41:03.428425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:43.580 [2024-11-29 18:41:03.428433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.580 [2024-11-29 18:41:03.428441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:43.580 [2024-11-29 18:41:03.428449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.567 ms 00:27:43.580 [2024-11-29 18:41:03.428473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.580 [2024-11-29 18:41:03.428503] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
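Note on the figures above: the sizes reported by get_bdev_size earlier in this trace and the layout dump just printed are internally consistent. A quick sanity check in shell arithmetic, using only values that appear in this log:

# Sanity checks using only values printed in this trace. get_bdev_size
# evidently reports MiB, i.e. num_blocks * block_size / 1024 / 1024:
echo $(( 1310720 * 4096 / 1024 / 1024 ))   # basen1   -> 5120  (MiB)
echo $(( 5242880 * 4096 / 1024 / 1024 ))   # basen1p0 -> 20480 (MiB)
# The L2P table is 3774873 entries at 4 bytes each, which matches the
# 14.50 MiB "Region l2p" in the NV cache layout dump after block rounding:
echo $(( 3774873 * 4 ))                    # -> 15099492 bytes, ~14.4 MiB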
00:27:43.580 [2024-11-29 18:41:03.428511] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:47.802 [2024-11-29 18:41:07.084953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.802 [2024-11-29 18:41:07.085039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:47.802 [2024-11-29 18:41:07.085059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3656.427 ms 00:27:47.802 [2024-11-29 18:41:07.085071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.802 [2024-11-29 18:41:07.099047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.802 [2024-11-29 18:41:07.099101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:47.802 [2024-11-29 18:41:07.099118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.873 ms 00:27:47.802 [2024-11-29 18:41:07.099132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.802 [2024-11-29 18:41:07.099244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.802 [2024-11-29 18:41:07.099255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:47.802 [2024-11-29 18:41:07.099268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:47.802 [2024-11-29 18:41:07.099283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.802 [2024-11-29 18:41:07.112256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.802 [2024-11-29 18:41:07.112308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:47.802 [2024-11-29 18:41:07.112323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.924 ms 00:27:47.802 [2024-11-29 18:41:07.112335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.802 [2024-11-29 18:41:07.112371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.802 [2024-11-29 18:41:07.112380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:47.802 [2024-11-29 18:41:07.112391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:47.802 [2024-11-29 18:41:07.112399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.802 [2024-11-29 18:41:07.113014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.802 [2024-11-29 18:41:07.113058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:47.802 [2024-11-29 18:41:07.113074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.557 ms 00:27:47.802 [2024-11-29 18:41:07.113088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.802 [2024-11-29 18:41:07.113145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.802 [2024-11-29 18:41:07.113165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:47.802 [2024-11-29 18:41:07.113178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:47.802 [2024-11-29 18:41:07.113190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.802 [2024-11-29 18:41:07.122215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.802 [2024-11-29 18:41:07.122410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:47.802 [2024-11-29 18:41:07.122512] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.996 ms 00:27:47.802 [2024-11-29 18:41:07.122539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.802 [2024-11-29 18:41:07.145739] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:47.802 [2024-11-29 18:41:07.147275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.802 [2024-11-29 18:41:07.147329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:47.802 [2024-11-29 18:41:07.147342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.623 ms 00:27:47.802 [2024-11-29 18:41:07.147353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.802 [2024-11-29 18:41:07.168988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.802 [2024-11-29 18:41:07.169056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:47.802 [2024-11-29 18:41:07.169070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.591 ms 00:27:47.802 [2024-11-29 18:41:07.169085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.802 [2024-11-29 18:41:07.169190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.802 [2024-11-29 18:41:07.169204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:47.802 [2024-11-29 18:41:07.169213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:27:47.802 [2024-11-29 18:41:07.169223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.802 [2024-11-29 18:41:07.174319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.802 [2024-11-29 18:41:07.174374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:47.802 [2024-11-29 18:41:07.174389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.075 ms 00:27:47.802 [2024-11-29 18:41:07.174401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.802 [2024-11-29 18:41:07.179529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.802 [2024-11-29 18:41:07.179578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:47.802 [2024-11-29 18:41:07.179589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.079 ms 00:27:47.802 [2024-11-29 18:41:07.179599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.802 [2024-11-29 18:41:07.179949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.802 [2024-11-29 18:41:07.179973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:47.802 [2024-11-29 18:41:07.179984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.303 ms 00:27:47.802 [2024-11-29 18:41:07.180000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.802 [2024-11-29 18:41:07.229498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.802 [2024-11-29 18:41:07.229555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:47.802 [2024-11-29 18:41:07.229572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.475 ms 00:27:47.802 [2024-11-29 18:41:07.229583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.802 [2024-11-29 18:41:07.236597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:47.802 [2024-11-29 18:41:07.236816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:47.802 [2024-11-29 18:41:07.236837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.936 ms 00:27:47.802 [2024-11-29 18:41:07.236849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.802 [2024-11-29 18:41:07.242925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.802 [2024-11-29 18:41:07.242984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:47.802 [2024-11-29 18:41:07.242994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.998 ms 00:27:47.803 [2024-11-29 18:41:07.243004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.803 [2024-11-29 18:41:07.249430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.803 [2024-11-29 18:41:07.249508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:47.803 [2024-11-29 18:41:07.249520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.377 ms 00:27:47.803 [2024-11-29 18:41:07.249533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.803 [2024-11-29 18:41:07.249586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.803 [2024-11-29 18:41:07.249600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:47.803 [2024-11-29 18:41:07.249610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:47.803 [2024-11-29 18:41:07.249620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.803 [2024-11-29 18:41:07.249711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.803 [2024-11-29 18:41:07.249725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:47.803 [2024-11-29 18:41:07.249740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:47.803 [2024-11-29 18:41:07.249754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.803 [2024-11-29 18:41:07.250956] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3830.465 ms, result 0 00:27:47.803 { 00:27:47.803 "name": "ftl", 00:27:47.803 "uuid": "eb885eaf-d12b-477b-802c-fba77e1cc9d7" 00:27:47.803 } 00:27:47.803 18:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:47.803 [2024-11-29 18:41:07.474081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.803 18:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:47.803 18:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:48.063 [2024-11-29 18:41:07.894519] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:48.063 18:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:48.323 [2024-11-29 18:41:08.110908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:48.323 18:41:08 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:48.584 Fill FTL, iteration 1 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=93656 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 93656 /var/tmp/spdk.tgt.sock 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93656 ']' 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:48.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:48.584 18:41:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:48.860 [2024-11-29 18:41:08.566445] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
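The spdk_dd runs that follow implement a two-iteration fill-and-verify loop in upgrade_shutdown.sh. A condensed sketch of that loop, reconstructed from the assignments traced above (tcp_dd is the test helper that routes spdk_dd over the NVMe/TCP attach; $testfile stands in for the traced test/ftl/file path; the checksums are presumably compared again after the restart, outside this excerpt):

# Reconstructed from the traced assignments (bs=1048576, count=1024, qd=2,
# iterations=2): write 1 GiB of random data per iteration, read the same
# slice back, and record one MD5 per slice.
bs=1048576; count=1024; qd=2; iterations=2
seek=0; skip=0; sums=()
for (( i = 0; i < iterations; i++ )); do
    echo "Fill FTL, iteration $(( i + 1 ))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    seek=$(( seek + count ))
    echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
    tcp_dd --ib=ftln1 --of="$testfile" --bs=$bs --count=$count --qd=$qd --skip=$skip
    skip=$(( skip + count ))
    sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')
done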
00:27:48.861 [2024-11-29 18:41:08.566854] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93656 ] 00:27:48.861 [2024-11-29 18:41:08.725978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.861 [2024-11-29 18:41:08.755090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.800 18:41:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.800 18:41:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:49.800 18:41:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:49.801 ftln1 00:27:49.801 18:41:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:49.801 18:41:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:50.060 18:41:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:50.060 18:41:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 93656 00:27:50.060 18:41:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93656 ']' 00:27:50.060 18:41:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 93656 00:27:50.060 18:41:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:50.060 18:41:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:50.060 18:41:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93656 00:27:50.060 killing process with pid 93656 00:27:50.060 18:41:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:50.060 18:41:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:50.060 18:41:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93656' 00:27:50.060 18:41:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 93656 00:27:50.060 18:41:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 93656 00:27:50.630 18:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:50.630 18:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:50.630 [2024-11-29 18:41:10.400492] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
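For orientation, the initiator-side bring-up traced above follows this pattern; every command appears in the trace, while the shorthand variables and the redirect into ini.json are a sketch of what ftl/common.sh does:

# Initiator bring-up as traced: start a second SPDK app on core 1, attach the
# exported FTL device over NVMe/TCP (its namespace shows up as bdev ftln1),
# dump the bdev subsystem config for spdk_dd's --json, then stop the app.
spdk=/home/vagrant/spdk_repo/spdk
rpc="$spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
"$spdk/build/bin/spdk_tgt" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
spdk_ini_pid=$!
waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock
$rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2018-09.io.spdk:cnode0                  # prints: ftln1
{ echo '{"subsystems": ['; $rpc save_subsystem_config -n bdev; echo ']}'; } \
    > "$spdk/test/ftl/config/ini.json"
kill "$spdk_ini_pid"                               # the trace uses killprocess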
00:27:50.630 [2024-11-29 18:41:10.400814] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93691 ] 00:27:50.891 [2024-11-29 18:41:10.563687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.891 [2024-11-29 18:41:10.585757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.277  [2024-11-29T18:41:13.126Z] Copying: 223/1024 [MB] (223 MBps) [2024-11-29T18:41:14.072Z] Copying: 461/1024 [MB] (238 MBps) [2024-11-29T18:41:15.016Z] Copying: 638/1024 [MB] (177 MBps) [2024-11-29T18:41:15.960Z] Copying: 810/1024 [MB] (172 MBps) [2024-11-29T18:41:16.218Z] Copying: 988/1024 [MB] (178 MBps) [2024-11-29T18:41:16.218Z] Copying: 1024/1024 [MB] (average 196 MBps) 00:27:56.313 00:27:56.313 Calculate MD5 checksum, iteration 1 00:27:56.313 18:41:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:56.313 18:41:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:56.313 18:41:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:56.313 18:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:56.313 18:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:56.313 18:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:56.313 18:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:56.313 18:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:56.313 [2024-11-29 18:41:16.211217] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
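The bracketed progress lines above are spdk_dd's periodic copy counters, and the closing figure is a plain average (total size over elapsed time), so the wall time of the data phase can be read straight off it:

# Average throughput = total size / elapsed time, so for the first fill:
echo "scale=1; 1024 / 196" | bc   # 1024 MB at ~196 MBps -> ~5.2 s of I/O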
00:27:56.313 [2024-11-29 18:41:16.211338] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93752 ] 00:27:56.572 [2024-11-29 18:41:16.368652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.572 [2024-11-29 18:41:16.388038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.957  [2024-11-29T18:41:18.806Z] Copying: 502/1024 [MB] (502 MBps) [2024-11-29T18:41:18.806Z] Copying: 1024/1024 [MB] (average 521 MBps) 00:27:58.901 00:27:58.901 18:41:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:58.901 18:41:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:01.439 18:41:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:01.439 Fill FTL, iteration 2 00:28:01.439 18:41:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=e7ca00eeffea492912e7ebf945bc2a95 00:28:01.439 18:41:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:01.439 18:41:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:01.439 18:41:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:01.439 18:41:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:01.439 18:41:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:01.439 18:41:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:01.439 18:41:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:01.439 18:41:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:01.439 18:41:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:01.439 [2024-11-29 18:41:20.931315] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
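One detail worth noting about the seek/skip bookkeeping above: offsets are counted in units of bs (1 MiB), so the second iteration lands exactly one iteration-size further into ftln1:

# seek/skip advance by count (1024) per iteration, in bs-sized (1 MiB) units:
echo $(( 1024 * 1048576 ))     # iteration-2 byte offset -> 1073741824, the traced $size
echo $(( 2 * 1024 * 1048576 )) # total written across both iterations -> 2 GiB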
00:28:01.439 [2024-11-29 18:41:20.931426] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93807 ] 00:28:01.439 [2024-11-29 18:41:21.083709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.439 [2024-11-29 18:41:21.100683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.382  [2024-11-29T18:41:23.675Z] Copying: 249/1024 [MB] (249 MBps) [2024-11-29T18:41:24.619Z] Copying: 497/1024 [MB] (248 MBps) [2024-11-29T18:41:25.561Z] Copying: 741/1024 [MB] (244 MBps) [2024-11-29T18:41:25.561Z] Copying: 1004/1024 [MB] (263 MBps) [2024-11-29T18:41:25.561Z] Copying: 1024/1024 [MB] (average 250 MBps) 00:28:05.656 00:28:05.656 Calculate MD5 checksum, iteration 2 00:28:05.656 18:41:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:05.656 18:41:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:05.656 18:41:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:05.656 18:41:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:05.656 18:41:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:05.656 18:41:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:05.656 18:41:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:05.656 18:41:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:05.930 [2024-11-29 18:41:25.577940] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:28:05.930 [2024-11-29 18:41:25.578072] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93854 ] 00:28:05.930 [2024-11-29 18:41:25.731251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.930 [2024-11-29 18:41:25.755100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.317  [2024-11-29T18:41:27.789Z] Copying: 634/1024 [MB] (634 MBps) [2024-11-29T18:41:28.359Z] Copying: 1024/1024 [MB] (average 634 MBps) 00:28:08.454 00:28:08.454 18:41:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:08.454 18:41:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:10.353 18:41:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:10.353 18:41:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=4be4f97d1d37d7584d756bc7ef9510aa 00:28:10.353 18:41:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:10.353 18:41:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:10.353 18:41:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:10.612 [2024-11-29 18:41:30.363639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.612 [2024-11-29 18:41:30.363695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:10.612 [2024-11-29 18:41:30.363708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:10.612 [2024-11-29 18:41:30.363721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.612 [2024-11-29 18:41:30.363741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.612 [2024-11-29 18:41:30.363749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:10.612 [2024-11-29 18:41:30.363755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:10.612 [2024-11-29 18:41:30.363762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.612 [2024-11-29 18:41:30.363778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.612 [2024-11-29 18:41:30.363784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:10.612 [2024-11-29 18:41:30.363795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:10.612 [2024-11-29 18:41:30.363805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.612 [2024-11-29 18:41:30.363859] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.208 ms, result 0 00:28:10.612 true 00:28:10.612 18:41:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:10.871 { 00:28:10.871 "name": "ftl", 00:28:10.871 "properties": [ 00:28:10.871 { 00:28:10.871 "name": "superblock_version", 00:28:10.871 "value": 5, 00:28:10.871 "read-only": true 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "name": "base_device", 00:28:10.871 "bands": [ 00:28:10.871 { 00:28:10.871 "id": 0, 00:28:10.871 "state": "FREE", 00:28:10.871 "validity": 0.0 
00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 1, 00:28:10.871 "state": "FREE", 00:28:10.871 "validity": 0.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 2, 00:28:10.871 "state": "FREE", 00:28:10.871 "validity": 0.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 3, 00:28:10.871 "state": "FREE", 00:28:10.871 "validity": 0.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 4, 00:28:10.871 "state": "FREE", 00:28:10.871 "validity": 0.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 5, 00:28:10.871 "state": "FREE", 00:28:10.871 "validity": 0.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 6, 00:28:10.871 "state": "FREE", 00:28:10.871 "validity": 0.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 7, 00:28:10.871 "state": "FREE", 00:28:10.871 "validity": 0.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 8, 00:28:10.871 "state": "FREE", 00:28:10.871 "validity": 0.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 9, 00:28:10.871 "state": "FREE", 00:28:10.871 "validity": 0.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 10, 00:28:10.871 "state": "FREE", 00:28:10.871 "validity": 0.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 11, 00:28:10.871 "state": "FREE", 00:28:10.871 "validity": 0.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 12, 00:28:10.871 "state": "FREE", 00:28:10.871 "validity": 0.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 13, 00:28:10.871 "state": "FREE", 00:28:10.871 "validity": 0.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 14, 00:28:10.871 "state": "FREE", 00:28:10.871 "validity": 0.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 15, 00:28:10.871 "state": "FREE", 00:28:10.871 "validity": 0.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 16, 00:28:10.871 "state": "FREE", 00:28:10.871 "validity": 0.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 17, 00:28:10.871 "state": "FREE", 00:28:10.871 "validity": 0.0 00:28:10.871 } 00:28:10.871 ], 00:28:10.871 "read-only": true 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "name": "cache_device", 00:28:10.871 "type": "bdev", 00:28:10.871 "chunks": [ 00:28:10.871 { 00:28:10.871 "id": 0, 00:28:10.871 "state": "INACTIVE", 00:28:10.871 "utilization": 0.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 1, 00:28:10.871 "state": "CLOSED", 00:28:10.871 "utilization": 1.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 2, 00:28:10.871 "state": "CLOSED", 00:28:10.871 "utilization": 1.0 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 3, 00:28:10.871 "state": "OPEN", 00:28:10.871 "utilization": 0.001953125 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "id": 4, 00:28:10.871 "state": "OPEN", 00:28:10.871 "utilization": 0.0 00:28:10.871 } 00:28:10.871 ], 00:28:10.871 "read-only": true 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "name": "verbose_mode", 00:28:10.871 "value": true, 00:28:10.871 "unit": "", 00:28:10.871 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:10.871 }, 00:28:10.871 { 00:28:10.871 "name": "prep_upgrade_on_shutdown", 00:28:10.871 "value": false, 00:28:10.871 "unit": "", 00:28:10.871 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:10.871 } 00:28:10.871 ] 00:28:10.871 } 00:28:10.871 18:41:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:10.871 [2024-11-29 18:41:30.667825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:10.871 [2024-11-29 18:41:30.667960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:10.871 [2024-11-29 18:41:30.667973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:10.871 [2024-11-29 18:41:30.667979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.871 [2024-11-29 18:41:30.667999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.871 [2024-11-29 18:41:30.668006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:10.871 [2024-11-29 18:41:30.668011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:10.871 [2024-11-29 18:41:30.668017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.871 [2024-11-29 18:41:30.668031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.871 [2024-11-29 18:41:30.668038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:10.871 [2024-11-29 18:41:30.668044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:10.871 [2024-11-29 18:41:30.668049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.871 [2024-11-29 18:41:30.668091] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.252 ms, result 0 00:28:10.871 true 00:28:10.871 18:41:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:10.871 18:41:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:10.872 18:41:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:11.129 18:41:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:11.129 18:41:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:11.129 18:41:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:11.387 [2024-11-29 18:41:31.064132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.387 [2024-11-29 18:41:31.064162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:11.388 [2024-11-29 18:41:31.064170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:11.388 [2024-11-29 18:41:31.064175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.388 [2024-11-29 18:41:31.064192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.388 [2024-11-29 18:41:31.064198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:11.388 [2024-11-29 18:41:31.064204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:11.388 [2024-11-29 18:41:31.064209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.388 [2024-11-29 18:41:31.064223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.388 [2024-11-29 18:41:31.064230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:11.388 [2024-11-29 18:41:31.064236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:11.388 [2024-11-29 18:41:31.064241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:11.388 [2024-11-29 18:41:31.064280] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.136 ms, result 0 00:28:11.388 true 00:28:11.388 18:41:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:11.388 { 00:28:11.388 "name": "ftl", 00:28:11.388 "properties": [ 00:28:11.388 { 00:28:11.388 "name": "superblock_version", 00:28:11.388 "value": 5, 00:28:11.388 "read-only": true 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "name": "base_device", 00:28:11.388 "bands": [ 00:28:11.388 { 00:28:11.388 "id": 0, 00:28:11.388 "state": "FREE", 00:28:11.388 "validity": 0.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 1, 00:28:11.388 "state": "FREE", 00:28:11.388 "validity": 0.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 2, 00:28:11.388 "state": "FREE", 00:28:11.388 "validity": 0.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 3, 00:28:11.388 "state": "FREE", 00:28:11.388 "validity": 0.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 4, 00:28:11.388 "state": "FREE", 00:28:11.388 "validity": 0.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 5, 00:28:11.388 "state": "FREE", 00:28:11.388 "validity": 0.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 6, 00:28:11.388 "state": "FREE", 00:28:11.388 "validity": 0.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 7, 00:28:11.388 "state": "FREE", 00:28:11.388 "validity": 0.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 8, 00:28:11.388 "state": "FREE", 00:28:11.388 "validity": 0.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 9, 00:28:11.388 "state": "FREE", 00:28:11.388 "validity": 0.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 10, 00:28:11.388 "state": "FREE", 00:28:11.388 "validity": 0.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 11, 00:28:11.388 "state": "FREE", 00:28:11.388 "validity": 0.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 12, 00:28:11.388 "state": "FREE", 00:28:11.388 "validity": 0.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 13, 00:28:11.388 "state": "FREE", 00:28:11.388 "validity": 0.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 14, 00:28:11.388 "state": "FREE", 00:28:11.388 "validity": 0.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 15, 00:28:11.388 "state": "FREE", 00:28:11.388 "validity": 0.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 16, 00:28:11.388 "state": "FREE", 00:28:11.388 "validity": 0.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 17, 00:28:11.388 "state": "FREE", 00:28:11.388 "validity": 0.0 00:28:11.388 } 00:28:11.388 ], 00:28:11.388 "read-only": true 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "name": "cache_device", 00:28:11.388 "type": "bdev", 00:28:11.388 "chunks": [ 00:28:11.388 { 00:28:11.388 "id": 0, 00:28:11.388 "state": "INACTIVE", 00:28:11.388 "utilization": 0.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 1, 00:28:11.388 "state": "CLOSED", 00:28:11.388 "utilization": 1.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 2, 00:28:11.388 "state": "CLOSED", 00:28:11.388 "utilization": 1.0 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 3, 00:28:11.388 "state": "OPEN", 00:28:11.388 "utilization": 0.001953125 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "id": 4, 00:28:11.388 "state": "OPEN", 00:28:11.388 "utilization": 0.0 00:28:11.388 } 00:28:11.388 ], 00:28:11.388 "read-only": true 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "name": "verbose_mode", 
00:28:11.388 "value": true, 00:28:11.388 "unit": "", 00:28:11.388 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:11.388 }, 00:28:11.388 { 00:28:11.388 "name": "prep_upgrade_on_shutdown", 00:28:11.388 "value": true, 00:28:11.388 "unit": "", 00:28:11.388 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:11.388 } 00:28:11.388 ] 00:28:11.388 } 00:28:11.648 18:41:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:11.648 18:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93532 ]] 00:28:11.648 18:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93532 00:28:11.648 18:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93532 ']' 00:28:11.648 18:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 93532 00:28:11.648 18:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:11.648 18:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:11.648 18:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93532 00:28:11.648 18:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:11.648 18:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:11.648 18:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93532' 00:28:11.648 killing process with pid 93532 00:28:11.648 18:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 93532 00:28:11.648 18:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 93532 00:28:11.648 [2024-11-29 18:41:31.416604] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:11.648 [2024-11-29 18:41:31.420864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.648 [2024-11-29 18:41:31.420990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:11.648 [2024-11-29 18:41:31.421008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:11.648 [2024-11-29 18:41:31.421017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.648 [2024-11-29 18:41:31.421042] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:11.648 [2024-11-29 18:41:31.421497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.648 [2024-11-29 18:41:31.421517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:11.648 [2024-11-29 18:41:31.421526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.441 ms 00:28:11.648 [2024-11-29 18:41:31.421534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.871 [2024-11-29 18:41:38.891407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.871 [2024-11-29 18:41:38.891472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:19.871 [2024-11-29 18:41:38.891486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7469.826 ms 00:28:19.871 [2024-11-29 18:41:38.891493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.871 [2024-11-29 18:41:38.892540] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:28:19.871 [2024-11-29 18:41:38.892649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:19.871 [2024-11-29 18:41:38.892661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.033 ms 00:28:19.871 [2024-11-29 18:41:38.892667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.871 [2024-11-29 18:41:38.893535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.871 [2024-11-29 18:41:38.893555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:19.871 [2024-11-29 18:41:38.893564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.846 ms 00:28:19.871 [2024-11-29 18:41:38.893575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.871 [2024-11-29 18:41:38.895241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.871 [2024-11-29 18:41:38.895270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:19.871 [2024-11-29 18:41:38.895277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.636 ms 00:28:19.871 [2024-11-29 18:41:38.895283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.871 [2024-11-29 18:41:38.897204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.871 [2024-11-29 18:41:38.897233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:19.871 [2024-11-29 18:41:38.897241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.898 ms 00:28:19.871 [2024-11-29 18:41:38.897251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.871 [2024-11-29 18:41:38.897316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.871 [2024-11-29 18:41:38.897324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:19.871 [2024-11-29 18:41:38.897337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:28:19.871 [2024-11-29 18:41:38.897343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.871 [2024-11-29 18:41:38.898554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.871 [2024-11-29 18:41:38.898580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:19.871 [2024-11-29 18:41:38.898587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.195 ms 00:28:19.871 [2024-11-29 18:41:38.898592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.871 [2024-11-29 18:41:38.899535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.871 [2024-11-29 18:41:38.899560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:19.871 [2024-11-29 18:41:38.899568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.919 ms 00:28:19.871 [2024-11-29 18:41:38.899573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.871 [2024-11-29 18:41:38.900692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.871 [2024-11-29 18:41:38.900718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:19.871 [2024-11-29 18:41:38.900725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.096 ms 00:28:19.871 [2024-11-29 18:41:38.900730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.871 [2024-11-29 18:41:38.901827] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.871 [2024-11-29 18:41:38.901853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:19.871 [2024-11-29 18:41:38.901860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.051 ms 00:28:19.871 [2024-11-29 18:41:38.901865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.871 [2024-11-29 18:41:38.901888] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:19.871 [2024-11-29 18:41:38.901898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:19.871 [2024-11-29 18:41:38.901906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:19.871 [2024-11-29 18:41:38.901912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:19.871 [2024-11-29 18:41:38.901918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:19.872 [2024-11-29 18:41:38.901924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:19.872 [2024-11-29 18:41:38.901930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:19.872 [2024-11-29 18:41:38.901935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:19.872 [2024-11-29 18:41:38.901941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:19.872 [2024-11-29 18:41:38.901946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:19.872 [2024-11-29 18:41:38.901952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:19.872 [2024-11-29 18:41:38.901958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:19.872 [2024-11-29 18:41:38.901963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:19.872 [2024-11-29 18:41:38.901969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:19.872 [2024-11-29 18:41:38.901975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:19.872 [2024-11-29 18:41:38.901981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:19.872 [2024-11-29 18:41:38.901987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:19.872 [2024-11-29 18:41:38.901992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:19.872 [2024-11-29 18:41:38.901998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:19.872 [2024-11-29 18:41:38.902005] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:19.872 [2024-11-29 18:41:38.902020] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: eb885eaf-d12b-477b-802c-fba77e1cc9d7 00:28:19.872 [2024-11-29 18:41:38.902025] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:19.872 [2024-11-29 18:41:38.902035] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:28:19.872 [2024-11-29 18:41:38.902041] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:19.872 [2024-11-29 18:41:38.902046] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:19.872 [2024-11-29 18:41:38.902052] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:19.872 [2024-11-29 18:41:38.902057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:19.872 [2024-11-29 18:41:38.902062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:19.872 [2024-11-29 18:41:38.902067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:19.872 [2024-11-29 18:41:38.902072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:19.872 [2024-11-29 18:41:38.902079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.872 [2024-11-29 18:41:38.902085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:19.872 [2024-11-29 18:41:38.902091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:28:19.872 [2024-11-29 18:41:38.902097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.872 [2024-11-29 18:41:38.903445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.872 [2024-11-29 18:41:38.903539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:19.872 [2024-11-29 18:41:38.903583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.335 ms 00:28:19.872 [2024-11-29 18:41:38.903601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.872 [2024-11-29 18:41:38.903679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.872 [2024-11-29 18:41:38.903719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:19.872 [2024-11-29 18:41:38.903770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:28:19.872 [2024-11-29 18:41:38.903784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.872 [2024-11-29 18:41:38.908229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.872 [2024-11-29 18:41:38.908321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:19.872 [2024-11-29 18:41:38.908361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.872 [2024-11-29 18:41:38.908377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.872 [2024-11-29 18:41:38.908407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.872 [2024-11-29 18:41:38.908469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:19.872 [2024-11-29 18:41:38.908488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.872 [2024-11-29 18:41:38.908502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.872 [2024-11-29 18:41:38.908584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.872 [2024-11-29 18:41:38.908640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:19.872 [2024-11-29 18:41:38.908699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.872 [2024-11-29 18:41:38.908717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.872 [2024-11-29 18:41:38.908740] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.872 [2024-11-29 18:41:38.908760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:19.872 [2024-11-29 18:41:38.908809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.872 [2024-11-29 18:41:38.908826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.872 [2024-11-29 18:41:38.916915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.872 [2024-11-29 18:41:38.916953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:19.872 [2024-11-29 18:41:38.916962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.872 [2024-11-29 18:41:38.916967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.872 [2024-11-29 18:41:38.923375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.872 [2024-11-29 18:41:38.923407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:19.872 [2024-11-29 18:41:38.923423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.872 [2024-11-29 18:41:38.923429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.872 [2024-11-29 18:41:38.923514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.872 [2024-11-29 18:41:38.923525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:19.872 [2024-11-29 18:41:38.923532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.872 [2024-11-29 18:41:38.923538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.872 [2024-11-29 18:41:38.923578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.872 [2024-11-29 18:41:38.923585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:19.872 [2024-11-29 18:41:38.923591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.872 [2024-11-29 18:41:38.923597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.872 [2024-11-29 18:41:38.923649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.872 [2024-11-29 18:41:38.923657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:19.872 [2024-11-29 18:41:38.923666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.872 [2024-11-29 18:41:38.923672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.872 [2024-11-29 18:41:38.923693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.872 [2024-11-29 18:41:38.923700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:19.872 [2024-11-29 18:41:38.923706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.872 [2024-11-29 18:41:38.923712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.872 [2024-11-29 18:41:38.923740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.872 [2024-11-29 18:41:38.923747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:19.872 [2024-11-29 18:41:38.923755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.872 [2024-11-29 18:41:38.923761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.872 
[2024-11-29 18:41:38.923792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.872 [2024-11-29 18:41:38.923800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:19.872 [2024-11-29 18:41:38.923806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.872 [2024-11-29 18:41:38.923812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.872 [2024-11-29 18:41:38.923906] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7503.006 ms, result 0 00:28:23.172 18:41:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:23.172 18:41:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:28:23.172 18:41:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:23.172 18:41:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:23.172 18:41:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:23.172 18:41:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94027 00:28:23.172 18:41:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:23.172 18:41:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94027 00:28:23.172 18:41:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94027 ']' 00:28:23.172 18:41:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:23.172 18:41:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.172 18:41:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.172 18:41:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.172 18:41:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.172 18:41:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:23.431 [2024-11-29 18:41:43.102356] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
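[annotation] The traces above show one clean shutdown/restart cycle: killprocess terminates the target (pid 93532), FTL persists its metadata and reports the 'FTL shutdown' management process (duration ~7.5 s, result 0), then tcp_target_setup relaunches spdk_tgt from the saved tgt.json and blocks in waitforlisten until the RPC socket answers. A minimal sketch of that sequence, reconstructed from the helper names and arguments visible in the trace (killprocess/waitforlisten are harness helpers from common/autotest_common.sh; their use here is an assumption based on the trace, not the harness source):

  # Illustrative reconstruction of the clean shutdown + restart cycle traced above.
  killprocess "$spdk_tgt_pid"        # plain kill = SIGTERM; FTL runs 'FTL shutdown'
  unset spdk_tgt_pid

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"      # poll until /var/tmp/spdk.sock accepts RPCs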
00:28:23.431 [2024-11-29 18:41:43.102641] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94027 ] 00:28:23.431 [2024-11-29 18:41:43.261217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.431 [2024-11-29 18:41:43.287346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.690 [2024-11-29 18:41:43.593441] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:23.690 [2024-11-29 18:41:43.593528] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:23.950 [2024-11-29 18:41:43.742163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.950 [2024-11-29 18:41:43.742233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:23.950 [2024-11-29 18:41:43.742249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:23.950 [2024-11-29 18:41:43.742258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.950 [2024-11-29 18:41:43.742327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.950 [2024-11-29 18:41:43.742340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:23.950 [2024-11-29 18:41:43.742353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:28:23.950 [2024-11-29 18:41:43.742361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.950 [2024-11-29 18:41:43.742388] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:23.950 [2024-11-29 18:41:43.742715] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:23.950 [2024-11-29 18:41:43.742735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.950 [2024-11-29 18:41:43.742743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:23.950 [2024-11-29 18:41:43.742753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.355 ms 00:28:23.950 [2024-11-29 18:41:43.742761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.950 [2024-11-29 18:41:43.744543] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:23.950 [2024-11-29 18:41:43.749060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.950 [2024-11-29 18:41:43.749127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:23.950 [2024-11-29 18:41:43.749139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.519 ms 00:28:23.950 [2024-11-29 18:41:43.749153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.950 [2024-11-29 18:41:43.749239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.950 [2024-11-29 18:41:43.749253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:23.950 [2024-11-29 18:41:43.749262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:23.950 [2024-11-29 18:41:43.749270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.950 [2024-11-29 18:41:43.757837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.950 [2024-11-29 
18:41:43.757887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:23.950 [2024-11-29 18:41:43.757897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.516 ms 00:28:23.950 [2024-11-29 18:41:43.757911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.950 [2024-11-29 18:41:43.757971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.950 [2024-11-29 18:41:43.757980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:23.950 [2024-11-29 18:41:43.757989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:23.950 [2024-11-29 18:41:43.757997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.950 [2024-11-29 18:41:43.758100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.950 [2024-11-29 18:41:43.758112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:23.950 [2024-11-29 18:41:43.758120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:23.950 [2024-11-29 18:41:43.758131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.950 [2024-11-29 18:41:43.758159] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:23.950 [2024-11-29 18:41:43.760186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.950 [2024-11-29 18:41:43.760373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:23.950 [2024-11-29 18:41:43.760391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.033 ms 00:28:23.950 [2024-11-29 18:41:43.760398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.950 [2024-11-29 18:41:43.760437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.950 [2024-11-29 18:41:43.760445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:23.950 [2024-11-29 18:41:43.760476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:23.950 [2024-11-29 18:41:43.760484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.950 [2024-11-29 18:41:43.760509] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:23.950 [2024-11-29 18:41:43.760531] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:23.950 [2024-11-29 18:41:43.760569] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:23.950 [2024-11-29 18:41:43.760587] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:23.950 [2024-11-29 18:41:43.760699] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:23.950 [2024-11-29 18:41:43.760710] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:23.950 [2024-11-29 18:41:43.760722] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:23.950 [2024-11-29 18:41:43.760732] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:23.950 [2024-11-29 18:41:43.760742] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:28:23.950 [2024-11-29 18:41:43.760751] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:23.950 [2024-11-29 18:41:43.760759] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:23.950 [2024-11-29 18:41:43.760766] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:23.950 [2024-11-29 18:41:43.760774] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:23.950 [2024-11-29 18:41:43.760782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.950 [2024-11-29 18:41:43.760793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:23.950 [2024-11-29 18:41:43.760800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.276 ms 00:28:23.950 [2024-11-29 18:41:43.760808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.950 [2024-11-29 18:41:43.760898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.950 [2024-11-29 18:41:43.760908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:23.950 [2024-11-29 18:41:43.760916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:28:23.950 [2024-11-29 18:41:43.760927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.950 [2024-11-29 18:41:43.761035] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:23.950 [2024-11-29 18:41:43.761047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:23.950 [2024-11-29 18:41:43.761059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:23.950 [2024-11-29 18:41:43.761068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.950 [2024-11-29 18:41:43.761076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:23.950 [2024-11-29 18:41:43.761084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:23.950 [2024-11-29 18:41:43.761092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:23.950 [2024-11-29 18:41:43.761100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:23.950 [2024-11-29 18:41:43.761109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:23.950 [2024-11-29 18:41:43.761117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.950 [2024-11-29 18:41:43.761124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:23.950 [2024-11-29 18:41:43.761132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:23.950 [2024-11-29 18:41:43.761139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.950 [2024-11-29 18:41:43.761153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:23.950 [2024-11-29 18:41:43.761162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:23.950 [2024-11-29 18:41:43.761176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.950 [2024-11-29 18:41:43.761184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:23.950 [2024-11-29 18:41:43.761191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:23.950 [2024-11-29 18:41:43.761199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.950 [2024-11-29 18:41:43.761207] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:23.950 [2024-11-29 18:41:43.761215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:23.950 [2024-11-29 18:41:43.761223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:23.950 [2024-11-29 18:41:43.761231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:23.950 [2024-11-29 18:41:43.761238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:23.950 [2024-11-29 18:41:43.761245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:23.950 [2024-11-29 18:41:43.761253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:23.950 [2024-11-29 18:41:43.761260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:23.950 [2024-11-29 18:41:43.761268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:23.950 [2024-11-29 18:41:43.761275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:23.950 [2024-11-29 18:41:43.761282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:23.950 [2024-11-29 18:41:43.761288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:23.950 [2024-11-29 18:41:43.761298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:23.950 [2024-11-29 18:41:43.761305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:23.950 [2024-11-29 18:41:43.761312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.950 [2024-11-29 18:41:43.761318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:23.950 [2024-11-29 18:41:43.761325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:23.950 [2024-11-29 18:41:43.761332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.950 [2024-11-29 18:41:43.761338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:23.950 [2024-11-29 18:41:43.761345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:23.950 [2024-11-29 18:41:43.761351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.950 [2024-11-29 18:41:43.761358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:23.950 [2024-11-29 18:41:43.761364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:23.950 [2024-11-29 18:41:43.761370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.950 [2024-11-29 18:41:43.761377] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:23.950 [2024-11-29 18:41:43.761390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:23.950 [2024-11-29 18:41:43.761401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:23.950 [2024-11-29 18:41:43.761408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.950 [2024-11-29 18:41:43.761422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:23.950 [2024-11-29 18:41:43.761429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:23.950 [2024-11-29 18:41:43.761436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:23.950 [2024-11-29 18:41:43.761443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:23.950 [2024-11-29 18:41:43.761449] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:23.950 [2024-11-29 18:41:43.761471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:23.950 [2024-11-29 18:41:43.761479] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:23.950 [2024-11-29 18:41:43.761489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:23.950 [2024-11-29 18:41:43.761498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:23.950 [2024-11-29 18:41:43.761506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:23.950 [2024-11-29 18:41:43.761513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:23.950 [2024-11-29 18:41:43.761520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:23.950 [2024-11-29 18:41:43.761527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:23.950 [2024-11-29 18:41:43.761534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:23.950 [2024-11-29 18:41:43.761542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:23.950 [2024-11-29 18:41:43.761549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:23.950 [2024-11-29 18:41:43.761559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:23.950 [2024-11-29 18:41:43.761566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:23.950 [2024-11-29 18:41:43.761573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:23.950 [2024-11-29 18:41:43.761580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:23.950 [2024-11-29 18:41:43.761587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:23.950 [2024-11-29 18:41:43.761595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:23.950 [2024-11-29 18:41:43.761602] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:23.950 [2024-11-29 18:41:43.761611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:23.950 [2024-11-29 18:41:43.761621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:23.950 [2024-11-29 18:41:43.761629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:23.950 [2024-11-29 18:41:43.761637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:23.950 [2024-11-29 18:41:43.761644] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:23.950 [2024-11-29 18:41:43.761652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.950 [2024-11-29 18:41:43.761659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:23.950 [2024-11-29 18:41:43.761667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.685 ms 00:28:23.950 [2024-11-29 18:41:43.761674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.950 [2024-11-29 18:41:43.761722] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:23.950 [2024-11-29 18:41:43.761733] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:28.149 [2024-11-29 18:41:47.616841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.149 [2024-11-29 18:41:47.617185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:28.149 [2024-11-29 18:41:47.617213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3855.106 ms 00:28:28.149 [2024-11-29 18:41:47.617231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.149 [2024-11-29 18:41:47.630925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.149 [2024-11-29 18:41:47.630977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:28.149 [2024-11-29 18:41:47.630991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.586 ms 00:28:28.149 [2024-11-29 18:41:47.630999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.149 [2024-11-29 18:41:47.631053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.149 [2024-11-29 18:41:47.631063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:28.149 [2024-11-29 18:41:47.631083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:28.149 [2024-11-29 18:41:47.631094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.149 [2024-11-29 18:41:47.644003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.149 [2024-11-29 18:41:47.644216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:28.149 [2024-11-29 18:41:47.644235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.856 ms 00:28:28.149 [2024-11-29 18:41:47.644246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.149 [2024-11-29 18:41:47.644291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.149 [2024-11-29 18:41:47.644302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:28.149 [2024-11-29 18:41:47.644317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:28.149 [2024-11-29 18:41:47.644326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.149 [2024-11-29 18:41:47.644954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.149 [2024-11-29 18:41:47.644978] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:28.149 [2024-11-29 18:41:47.644991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.546 ms 00:28:28.149 [2024-11-29 18:41:47.645000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.149 [2024-11-29 18:41:47.645065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.149 [2024-11-29 18:41:47.645076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:28.149 [2024-11-29 18:41:47.645087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:28:28.149 [2024-11-29 18:41:47.645099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.149 [2024-11-29 18:41:47.654280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.150 [2024-11-29 18:41:47.654328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:28.150 [2024-11-29 18:41:47.654339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.157 ms 00:28:28.150 [2024-11-29 18:41:47.654348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.150 [2024-11-29 18:41:47.668448] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:28.150 [2024-11-29 18:41:47.668564] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:28.150 [2024-11-29 18:41:47.668599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.150 [2024-11-29 18:41:47.668617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:28.150 [2024-11-29 18:41:47.668637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.115 ms 00:28:28.150 [2024-11-29 18:41:47.668653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.150 [2024-11-29 18:41:47.676605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.150 [2024-11-29 18:41:47.676674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:28.150 [2024-11-29 18:41:47.676709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.900 ms 00:28:28.150 [2024-11-29 18:41:47.676725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.150 [2024-11-29 18:41:47.679216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.150 [2024-11-29 18:41:47.679263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:28.150 [2024-11-29 18:41:47.679274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.409 ms 00:28:28.150 [2024-11-29 18:41:47.679282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.150 [2024-11-29 18:41:47.681847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.150 [2024-11-29 18:41:47.681892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:28.150 [2024-11-29 18:41:47.681902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.538 ms 00:28:28.150 [2024-11-29 18:41:47.681910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.150 [2024-11-29 18:41:47.682255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.150 [2024-11-29 18:41:47.682267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:28.150 [2024-11-29 
18:41:47.682277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.284 ms 00:28:28.150 [2024-11-29 18:41:47.682290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.150 [2024-11-29 18:41:47.707321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.150 [2024-11-29 18:41:47.707383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:28.150 [2024-11-29 18:41:47.707398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.009 ms 00:28:28.150 [2024-11-29 18:41:47.707406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.150 [2024-11-29 18:41:47.715825] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:28.150 [2024-11-29 18:41:47.716779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.150 [2024-11-29 18:41:47.716828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:28.150 [2024-11-29 18:41:47.716840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.294 ms 00:28:28.150 [2024-11-29 18:41:47.716848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.150 [2024-11-29 18:41:47.716936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.150 [2024-11-29 18:41:47.716948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:28.150 [2024-11-29 18:41:47.716958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:28.150 [2024-11-29 18:41:47.716966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.150 [2024-11-29 18:41:47.717015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.150 [2024-11-29 18:41:47.717029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:28.150 [2024-11-29 18:41:47.717038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:28.150 [2024-11-29 18:41:47.717046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.150 [2024-11-29 18:41:47.717074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.150 [2024-11-29 18:41:47.717088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:28.150 [2024-11-29 18:41:47.717097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:28.150 [2024-11-29 18:41:47.717113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.150 [2024-11-29 18:41:47.717151] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:28.150 [2024-11-29 18:41:47.717162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.150 [2024-11-29 18:41:47.717175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:28.150 [2024-11-29 18:41:47.717186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:28.150 [2024-11-29 18:41:47.717195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.150 [2024-11-29 18:41:47.722600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.150 [2024-11-29 18:41:47.722805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:28.150 [2024-11-29 18:41:47.722827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.382 ms 00:28:28.150 [2024-11-29 18:41:47.722837] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:28.150 [2024-11-29 18:41:47.723215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.150 [2024-11-29 18:41:47.723252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:28.150 [2024-11-29 18:41:47.723266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:28:28.150 [2024-11-29 18:41:47.723279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.150 [2024-11-29 18:41:47.724523] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3981.879 ms, result 0 00:28:28.150 [2024-11-29 18:41:47.737298] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.150 [2024-11-29 18:41:47.753285] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:28.150 [2024-11-29 18:41:47.761410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:28.150 18:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.150 18:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:28.150 18:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:28.150 18:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:28.150 18:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:28.150 [2024-11-29 18:41:47.953399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.150 [2024-11-29 18:41:47.953541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:28.150 [2024-11-29 18:41:47.953599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:28.150 [2024-11-29 18:41:47.953628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.150 [2024-11-29 18:41:47.953670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.150 [2024-11-29 18:41:47.953692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:28.150 [2024-11-29 18:41:47.953715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:28.150 [2024-11-29 18:41:47.953733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.150 [2024-11-29 18:41:47.953764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.150 [2024-11-29 18:41:47.953784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:28.150 [2024-11-29 18:41:47.953804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:28.150 [2024-11-29 18:41:47.953848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.150 [2024-11-29 18:41:47.953920] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.505 ms, result 0 00:28:28.150 true 00:28:28.150 18:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:28.411 { 00:28:28.411 "name": "ftl", 00:28:28.411 "properties": [ 00:28:28.411 { 00:28:28.411 "name": "superblock_version", 00:28:28.411 "value": 5, 00:28:28.411 "read-only": true 00:28:28.411 }, 00:28:28.411 { 
00:28:28.411 "name": "base_device", 00:28:28.411 "bands": [ 00:28:28.411 { 00:28:28.411 "id": 0, 00:28:28.411 "state": "CLOSED", 00:28:28.411 "validity": 1.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 1, 00:28:28.411 "state": "CLOSED", 00:28:28.411 "validity": 1.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 2, 00:28:28.411 "state": "CLOSED", 00:28:28.411 "validity": 0.007843137254901933 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 3, 00:28:28.411 "state": "FREE", 00:28:28.411 "validity": 0.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 4, 00:28:28.411 "state": "FREE", 00:28:28.411 "validity": 0.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 5, 00:28:28.411 "state": "FREE", 00:28:28.411 "validity": 0.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 6, 00:28:28.411 "state": "FREE", 00:28:28.411 "validity": 0.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 7, 00:28:28.411 "state": "FREE", 00:28:28.411 "validity": 0.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 8, 00:28:28.411 "state": "FREE", 00:28:28.411 "validity": 0.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 9, 00:28:28.411 "state": "FREE", 00:28:28.411 "validity": 0.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 10, 00:28:28.411 "state": "FREE", 00:28:28.411 "validity": 0.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 11, 00:28:28.411 "state": "FREE", 00:28:28.411 "validity": 0.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 12, 00:28:28.411 "state": "FREE", 00:28:28.411 "validity": 0.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 13, 00:28:28.411 "state": "FREE", 00:28:28.411 "validity": 0.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 14, 00:28:28.411 "state": "FREE", 00:28:28.411 "validity": 0.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 15, 00:28:28.411 "state": "FREE", 00:28:28.411 "validity": 0.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 16, 00:28:28.411 "state": "FREE", 00:28:28.411 "validity": 0.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 17, 00:28:28.411 "state": "FREE", 00:28:28.411 "validity": 0.0 00:28:28.411 } 00:28:28.411 ], 00:28:28.411 "read-only": true 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "name": "cache_device", 00:28:28.411 "type": "bdev", 00:28:28.411 "chunks": [ 00:28:28.411 { 00:28:28.411 "id": 0, 00:28:28.411 "state": "INACTIVE", 00:28:28.411 "utilization": 0.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 1, 00:28:28.411 "state": "OPEN", 00:28:28.411 "utilization": 0.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 2, 00:28:28.411 "state": "OPEN", 00:28:28.411 "utilization": 0.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 3, 00:28:28.411 "state": "FREE", 00:28:28.411 "utilization": 0.0 00:28:28.411 }, 00:28:28.411 { 00:28:28.411 "id": 4, 00:28:28.411 "state": "FREE", 00:28:28.411 "utilization": 0.0 00:28:28.411 } 00:28:28.411 ], 00:28:28.412 "read-only": true 00:28:28.412 }, 00:28:28.412 { 00:28:28.412 "name": "verbose_mode", 00:28:28.412 "value": true, 00:28:28.412 "unit": "", 00:28:28.412 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:28.412 }, 00:28:28.412 { 00:28:28.412 "name": "prep_upgrade_on_shutdown", 00:28:28.412 "value": false, 00:28:28.412 "unit": "", 00:28:28.412 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:28.412 } 00:28:28.412 ] 00:28:28.412 } 00:28:28.412 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:28.412 18:41:48 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:28.412 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:28.673 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:28.673 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:28.673 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:28.673 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:28.673 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:28.934 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:28.934 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:28.934 Validate MD5 checksum, iteration 1 00:28:28.934 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:28.935 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:28.935 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:28.935 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:28.935 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:28.935 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:28.935 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:28.935 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:28.935 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:28.935 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:28.935 18:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:28.935 [2024-11-29 18:41:48.657373] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
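[annotation] After the restart, and before the first checksum pass launches, the test confirms the device came back quiescent: the first jq filter counts cache-device chunks with non-zero utilization (used=0), the second counts bands reported as OPENED (opened=0), and both counts must be zero. The shape of that check, with the jq expressions copied verbatim from the trace and the wrapper shell reconstructed as an assumption:

  # Quiescence check performed above; jq filters are verbatim from the trace,
  # the surrounding shell is an illustrative reconstruction.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  props=$("$rpc" bdev_ftl_get_properties -b ftl)
  used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[]
              | select(.utilization != 0.0)] | length' <<< "$props")
  opened=$(jq '[.properties[] | select(.name == "bands") | .bands[]
                | select(.state == "OPENED")] | length' <<< "$props")
  [[ $used -eq 0 && $opened -eq 0 ]] || exit 1   # device must start out clean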
00:28:28.935 [2024-11-29 18:41:48.657722] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94096 ] 00:28:28.935 [2024-11-29 18:41:48.817491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.195 [2024-11-29 18:41:48.847745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.580  [2024-11-29T18:41:51.426Z] Copying: 512/1024 [MB] (512 MBps) [2024-11-29T18:41:51.426Z] Copying: 1011/1024 [MB] (499 MBps) [2024-11-29T18:41:51.997Z] Copying: 1024/1024 [MB] (average 505 MBps) 00:28:32.092 00:28:32.092 18:41:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:32.092 18:41:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:34.626 18:41:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:34.626 Validate MD5 checksum, iteration 2 00:28:34.626 18:41:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e7ca00eeffea492912e7ebf945bc2a95 00:28:34.626 18:41:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e7ca00eeffea492912e7ebf945bc2a95 != \e\7\c\a\0\0\e\e\f\f\e\a\4\9\2\9\1\2\e\7\e\b\f\9\4\5\b\c\2\a\9\5 ]] 00:28:34.626 18:41:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:34.626 18:41:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:34.626 18:41:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:34.626 18:41:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:34.626 18:41:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:34.626 18:41:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:34.626 18:41:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:34.626 18:41:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:34.626 18:41:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:34.626 [2024-11-29 18:41:54.211911] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
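Each validation pass copies one 1 GiB window (1024 blocks of 1048576 bytes at queue depth 2) from the ftln1 initiator bdev into a scratch file, md5sums it, then advances skip by the block count, so iteration 2 reads the window immediately behind iteration 1. The backslash-riddled right-hand side of the @105 comparison is just bash xtrace escaping of the stored reference sum. A sketch of the loop under those assumptions (spdk_dd flags verbatim from the trace; the ref array holding previously recorded sums is a hypothetical name):

    skip=0
    for i in 0 1; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
            --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
            --bs=1048576 --count=1024 --qd=2 --skip=$skip
        sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')
        [[ $sum == "${ref[$i]}" ]] || exit 1  # ref[]: sums recorded earlier (hypothetical)
        skip=$((skip + 1024))                 # next iteration reads the following 1 GiB
    done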
00:28:34.626 [2024-11-29 18:41:54.212592] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94164 ] 00:28:34.626 [2024-11-29 18:41:54.370548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.626 [2024-11-29 18:41:54.389936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.004  [2024-11-29T18:41:56.852Z] Copying: 615/1024 [MB] (615 MBps) [2024-11-29T18:41:57.111Z] Copying: 1024/1024 [MB] (average 550 MBps) 00:28:37.206 00:28:37.207 18:41:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:37.207 18:41:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4be4f97d1d37d7584d756bc7ef9510aa 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4be4f97d1d37d7584d756bc7ef9510aa != \4\b\e\4\f\9\7\d\1\d\3\7\d\7\5\8\4\d\7\5\6\b\c\7\e\f\9\5\1\0\a\a ]] 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 94027 ]] 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 94027 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:39.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94214 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94214 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94214 ']' 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
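tcp_target_shutdown_dirty (@114) SIGKILLs the old target, pid 94027, so FTL never runs its shutdown path or persists a clean-state marker; tcp_target_setup (@115) then launches a fresh spdk_tgt from the saved tgt.json, and waitforlisten blocks until the new process answers on its RPC socket. The recovery traffic that follows is the direct consequence: the restarted instance loads a dirty superblock and must replay band state and P2L checkpoints. A rough equivalent of the kill/restart cycle, with the pid and paths taken from this run (the rpc_get_methods poll stands in for waitforlisten):

    kill -9 94027    # dirty: no FTL shutdown path runs, clean state is never persisted
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1    # poll until the new target listens on its RPC socket
    done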
00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.737 18:41:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:39.737 [2024-11-29 18:41:59.260822] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:28:39.737 [2024-11-29 18:41:59.260953] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94214 ] 00:28:39.737 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 94027 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:39.737 [2024-11-29 18:41:59.416464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.737 [2024-11-29 18:41:59.448026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.997 [2024-11-29 18:41:59.743421] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:39.997 [2024-11-29 18:41:59.743491] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:39.997 [2024-11-29 18:41:59.889348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.997 [2024-11-29 18:41:59.889508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:39.997 [2024-11-29 18:41:59.889526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:39.997 [2024-11-29 18:41:59.889533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.997 [2024-11-29 18:41:59.889588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.997 [2024-11-29 18:41:59.889598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:39.997 [2024-11-29 18:41:59.889605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:28:39.997 [2024-11-29 18:41:59.889611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.997 [2024-11-29 18:41:59.889628] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:39.997 [2024-11-29 18:41:59.889819] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:39.997 [2024-11-29 18:41:59.889831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.997 [2024-11-29 18:41:59.889837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:39.997 [2024-11-29 18:41:59.889844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.207 ms 00:28:39.997 [2024-11-29 18:41:59.889851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.997 [2024-11-29 18:41:59.890077] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:39.997 [2024-11-29 18:41:59.894036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.997 [2024-11-29 18:41:59.894068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:39.997 [2024-11-29 18:41:59.894080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.959 ms 00:28:39.997 [2024-11-29 18:41:59.894087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.997 [2024-11-29 18:41:59.895106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:39.997 [2024-11-29 18:41:59.895131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:39.997 [2024-11-29 18:41:59.895139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:28:39.997 [2024-11-29 18:41:59.895146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.997 [2024-11-29 18:41:59.895370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.997 [2024-11-29 18:41:59.895379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:39.997 [2024-11-29 18:41:59.895386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.170 ms 00:28:39.997 [2024-11-29 18:41:59.895395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.997 [2024-11-29 18:41:59.895423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.997 [2024-11-29 18:41:59.895431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:39.997 [2024-11-29 18:41:59.895437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:39.997 [2024-11-29 18:41:59.895443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.997 [2024-11-29 18:41:59.895472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.997 [2024-11-29 18:41:59.895487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:39.997 [2024-11-29 18:41:59.895495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:39.997 [2024-11-29 18:41:59.895501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.997 [2024-11-29 18:41:59.895517] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:39.997 [2024-11-29 18:41:59.896318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.997 [2024-11-29 18:41:59.896330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:39.997 [2024-11-29 18:41:59.896337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.805 ms 00:28:39.997 [2024-11-29 18:41:59.896343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.997 [2024-11-29 18:41:59.896373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.997 [2024-11-29 18:41:59.896382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:39.997 [2024-11-29 18:41:59.896389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:39.997 [2024-11-29 18:41:59.896394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.997 [2024-11-29 18:41:59.896411] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:39.997 [2024-11-29 18:41:59.896428] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:39.997 [2024-11-29 18:41:59.896471] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:39.997 [2024-11-29 18:41:59.896488] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:39.997 [2024-11-29 18:41:59.896570] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:39.997 [2024-11-29 18:41:59.896580] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:39.998 [2024-11-29 18:41:59.896588] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:39.998 [2024-11-29 18:41:59.896596] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:39.998 [2024-11-29 18:41:59.896602] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:39.998 [2024-11-29 18:41:59.896609] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:39.998 [2024-11-29 18:41:59.896615] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:39.998 [2024-11-29 18:41:59.896621] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:39.998 [2024-11-29 18:41:59.896628] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:39.998 [2024-11-29 18:41:59.896633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.998 [2024-11-29 18:41:59.896641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:39.998 [2024-11-29 18:41:59.896647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.225 ms 00:28:39.998 [2024-11-29 18:41:59.896653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.998 [2024-11-29 18:41:59.896718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.998 [2024-11-29 18:41:59.896725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:39.998 [2024-11-29 18:41:59.896732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:39.998 [2024-11-29 18:41:59.896738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.998 [2024-11-29 18:41:59.896815] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:39.998 [2024-11-29 18:41:59.896823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:39.998 [2024-11-29 18:41:59.896832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:39.998 [2024-11-29 18:41:59.896838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.998 [2024-11-29 18:41:59.896844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:39.998 [2024-11-29 18:41:59.896850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:39.998 [2024-11-29 18:41:59.896855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:39.998 [2024-11-29 18:41:59.896860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:39.998 [2024-11-29 18:41:59.896866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:39.998 [2024-11-29 18:41:59.896872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.998 [2024-11-29 18:41:59.896879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:39.998 [2024-11-29 18:41:59.896888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:39.998 [2024-11-29 18:41:59.896894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.998 [2024-11-29 18:41:59.896899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:39.998 [2024-11-29 18:41:59.896908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:39.998 [2024-11-29 18:41:59.896914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.998 [2024-11-29 18:41:59.896919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:39.998 [2024-11-29 18:41:59.896924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:39.998 [2024-11-29 18:41:59.896929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.998 [2024-11-29 18:41:59.896934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:39.998 [2024-11-29 18:41:59.896939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:39.998 [2024-11-29 18:41:59.896944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:39.998 [2024-11-29 18:41:59.896949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:39.998 [2024-11-29 18:41:59.896954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:39.998 [2024-11-29 18:41:59.896959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:39.998 [2024-11-29 18:41:59.896963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:39.998 [2024-11-29 18:41:59.896969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:39.998 [2024-11-29 18:41:59.896975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:39.998 [2024-11-29 18:41:59.896981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:39.998 [2024-11-29 18:41:59.896987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:39.998 [2024-11-29 18:41:59.896994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:39.998 [2024-11-29 18:41:59.897000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:39.998 [2024-11-29 18:41:59.897006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:39.998 [2024-11-29 18:41:59.897012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.998 [2024-11-29 18:41:59.897017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:39.998 [2024-11-29 18:41:59.897023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:39.998 [2024-11-29 18:41:59.897029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.998 [2024-11-29 18:41:59.897035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:39.998 [2024-11-29 18:41:59.897040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:39.998 [2024-11-29 18:41:59.897046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.998 [2024-11-29 18:41:59.897051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:39.998 [2024-11-29 18:41:59.897057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:39.998 [2024-11-29 18:41:59.897065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.998 [2024-11-29 18:41:59.897071] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:39.998 [2024-11-29 18:41:59.897077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:39.998 [2024-11-29 18:41:59.897083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:39.998 [2024-11-29 18:41:59.897092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:39.998 [2024-11-29 18:41:59.897102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:39.998 [2024-11-29 18:41:59.897108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:39.998 [2024-11-29 18:41:59.897114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:39.998 [2024-11-29 18:41:59.897120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:39.998 [2024-11-29 18:41:59.897126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:39.998 [2024-11-29 18:41:59.897131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:39.998 [2024-11-29 18:41:59.897139] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:39.998 [2024-11-29 18:41:59.897147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:39.998 [2024-11-29 18:41:59.897154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:39.998 [2024-11-29 18:41:59.897160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:39.998 [2024-11-29 18:41:59.897166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:39.998 [2024-11-29 18:41:59.897173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:39.998 [2024-11-29 18:41:59.897179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:39.998 [2024-11-29 18:41:59.897185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:39.998 [2024-11-29 18:41:59.897191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:39.998 [2024-11-29 18:41:59.897199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:39.998 [2024-11-29 18:41:59.897208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:39.998 [2024-11-29 18:41:59.897214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:39.998 [2024-11-29 18:41:59.897220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:39.998 [2024-11-29 18:41:59.897226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:39.998 [2024-11-29 18:41:59.897232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:39.998 [2024-11-29 18:41:59.897238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:39.998 [2024-11-29 18:41:59.897245] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:39.998 [2024-11-29 18:41:59.897251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:39.998 [2024-11-29 18:41:59.897259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:39.998 [2024-11-29 18:41:59.897266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:39.998 [2024-11-29 18:41:59.897273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:39.998 [2024-11-29 18:41:59.897279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:39.998 [2024-11-29 18:41:59.897286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.998 [2024-11-29 18:41:59.897294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:39.998 [2024-11-29 18:41:59.897301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.524 ms 00:28:39.998 [2024-11-29 18:41:59.897309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.257 [2024-11-29 18:41:59.905904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.257 [2024-11-29 18:41:59.906024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:40.257 [2024-11-29 18:41:59.906078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.556 ms 00:28:40.257 [2024-11-29 18:41:59.906098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.257 [2024-11-29 18:41:59.906143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.257 [2024-11-29 18:41:59.906162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:40.257 [2024-11-29 18:41:59.906326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:40.257 [2024-11-29 18:41:59.906351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.258 [2024-11-29 18:41:59.916192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.258 [2024-11-29 18:41:59.916284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:40.258 [2024-11-29 18:41:59.916321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.780 ms 00:28:40.258 [2024-11-29 18:41:59.916339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.258 [2024-11-29 18:41:59.916376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.258 [2024-11-29 18:41:59.916398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:40.258 [2024-11-29 18:41:59.916413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:40.258 [2024-11-29 18:41:59.916429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.258 [2024-11-29 18:41:59.916525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.258 [2024-11-29 18:41:59.916549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:40.258 [2024-11-29 18:41:59.916602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:28:40.258 [2024-11-29 18:41:59.916619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:40.258 [2024-11-29 18:41:59.916669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.258 [2024-11-29 18:41:59.916690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:40.258 [2024-11-29 18:41:59.916943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:40.258 [2024-11-29 18:41:59.916963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.258 [2024-11-29 18:41:59.923291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.258 [2024-11-29 18:41:59.923371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:40.258 [2024-11-29 18:41:59.923409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.302 ms 00:28:40.258 [2024-11-29 18:41:59.923427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.258 [2024-11-29 18:41:59.923518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.258 [2024-11-29 18:41:59.923541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:40.258 [2024-11-29 18:41:59.923594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:40.258 [2024-11-29 18:41:59.923611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.258 [2024-11-29 18:41:59.938107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.258 [2024-11-29 18:41:59.938234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:40.258 [2024-11-29 18:41:59.938296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.472 ms 00:28:40.258 [2024-11-29 18:41:59.938320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.258 [2024-11-29 18:41:59.939800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.258 [2024-11-29 18:41:59.939907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:40.258 [2024-11-29 18:41:59.939975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.333 ms 00:28:40.258 [2024-11-29 18:41:59.940004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.258 [2024-11-29 18:41:59.957957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.258 [2024-11-29 18:41:59.958089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:40.258 [2024-11-29 18:41:59.958136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.887 ms 00:28:40.258 [2024-11-29 18:41:59.958156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.258 [2024-11-29 18:41:59.958286] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:40.258 [2024-11-29 18:41:59.958399] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:40.258 [2024-11-29 18:41:59.958594] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:40.258 [2024-11-29 18:41:59.958704] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:40.258 [2024-11-29 18:41:59.958731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.258 [2024-11-29 18:41:59.958747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:40.258 [2024-11-29 
18:41:59.959014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.529 ms 00:28:40.258 [2024-11-29 18:41:59.959048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.258 [2024-11-29 18:41:59.959124] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:40.258 [2024-11-29 18:41:59.959225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.258 [2024-11-29 18:41:59.959244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:40.258 [2024-11-29 18:41:59.959260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.102 ms 00:28:40.258 [2024-11-29 18:41:59.959299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.258 [2024-11-29 18:41:59.962769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.258 [2024-11-29 18:41:59.962861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:40.258 [2024-11-29 18:41:59.962906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.437 ms 00:28:40.258 [2024-11-29 18:41:59.962924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.258 [2024-11-29 18:41:59.963492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.258 [2024-11-29 18:41:59.963564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:40.258 [2024-11-29 18:41:59.963616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:40.258 [2024-11-29 18:41:59.963634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.258 [2024-11-29 18:41:59.963692] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:40.258 [2024-11-29 18:41:59.963871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.258 [2024-11-29 18:41:59.963896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:40.258 [2024-11-29 18:41:59.963915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.181 ms 00:28:40.258 [2024-11-29 18:41:59.963970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.828 [2024-11-29 18:42:00.495145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.828 [2024-11-29 18:42:00.495220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:40.828 [2024-11-29 18:42:00.495238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 530.903 ms 00:28:40.828 [2024-11-29 18:42:00.495247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.828 [2024-11-29 18:42:00.496856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.828 [2024-11-29 18:42:00.496896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:40.828 [2024-11-29 18:42:00.496912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.126 ms 00:28:40.828 [2024-11-29 18:42:00.496921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.828 [2024-11-29 18:42:00.497472] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:40.828 [2024-11-29 18:42:00.497504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.828 [2024-11-29 18:42:00.497513] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:40.828 [2024-11-29 18:42:00.497534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.555 ms 00:28:40.828 [2024-11-29 18:42:00.497542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.828 [2024-11-29 18:42:00.497574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.828 [2024-11-29 18:42:00.497586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:40.828 [2024-11-29 18:42:00.497595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:40.828 [2024-11-29 18:42:00.497603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.828 [2024-11-29 18:42:00.497637] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 533.942 ms, result 0 00:28:40.828 [2024-11-29 18:42:00.497679] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:40.828 [2024-11-29 18:42:00.497781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.828 [2024-11-29 18:42:00.497790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:40.828 [2024-11-29 18:42:00.497798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.103 ms 00:28:40.828 [2024-11-29 18:42:00.497806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.399 [2024-11-29 18:42:01.256047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.399 [2024-11-29 18:42:01.256155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:41.399 [2024-11-29 18:42:01.256175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 757.806 ms 00:28:41.399 [2024-11-29 18:42:01.256185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.399 [2024-11-29 18:42:01.258664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.399 [2024-11-29 18:42:01.258958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:41.399 [2024-11-29 18:42:01.258980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.751 ms 00:28:41.399 [2024-11-29 18:42:01.258989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.399 [2024-11-29 18:42:01.260277] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:41.399 [2024-11-29 18:42:01.260333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.399 [2024-11-29 18:42:01.260343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:41.399 [2024-11-29 18:42:01.260353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.218 ms 00:28:41.399 [2024-11-29 18:42:01.260361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.399 [2024-11-29 18:42:01.260405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.399 [2024-11-29 18:42:01.260416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:41.399 [2024-11-29 18:42:01.260426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:41.399 [2024-11-29 18:42:01.260434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.399 [2024-11-29 
18:42:01.260501] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 762.810 ms, result 0 00:28:41.399 [2024-11-29 18:42:01.260557] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:41.399 [2024-11-29 18:42:01.260569] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:41.399 [2024-11-29 18:42:01.260582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.399 [2024-11-29 18:42:01.260591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:41.399 [2024-11-29 18:42:01.260606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1296.904 ms 00:28:41.399 [2024-11-29 18:42:01.260614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.399 [2024-11-29 18:42:01.260649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.399 [2024-11-29 18:42:01.260660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:41.399 [2024-11-29 18:42:01.260669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:41.399 [2024-11-29 18:42:01.260677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.399 [2024-11-29 18:42:01.269420] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:41.399 [2024-11-29 18:42:01.269565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.399 [2024-11-29 18:42:01.269581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:41.399 [2024-11-29 18:42:01.269590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.871 ms 00:28:41.399 [2024-11-29 18:42:01.269598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.399 [2024-11-29 18:42:01.270199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.399 [2024-11-29 18:42:01.270230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:41.399 [2024-11-29 18:42:01.270240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.520 ms 00:28:41.399 [2024-11-29 18:42:01.270247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.399 [2024-11-29 18:42:01.271931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.399 [2024-11-29 18:42:01.271955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:41.399 [2024-11-29 18:42:01.271964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.666 ms 00:28:41.399 [2024-11-29 18:42:01.271972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.399 [2024-11-29 18:42:01.272023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.399 [2024-11-29 18:42:01.272031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:41.399 [2024-11-29 18:42:01.272048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:41.399 [2024-11-29 18:42:01.272055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.399 [2024-11-29 18:42:01.272160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.399 [2024-11-29 18:42:01.272171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:41.399 
[2024-11-29 18:42:01.272181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:28:41.399 [2024-11-29 18:42:01.272188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.399 [2024-11-29 18:42:01.272209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.399 [2024-11-29 18:42:01.272217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:41.399 [2024-11-29 18:42:01.272224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:41.399 [2024-11-29 18:42:01.272235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.399 [2024-11-29 18:42:01.272266] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:41.399 [2024-11-29 18:42:01.272275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.399 [2024-11-29 18:42:01.272281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:41.399 [2024-11-29 18:42:01.272293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:41.399 [2024-11-29 18:42:01.272300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.399 [2024-11-29 18:42:01.272352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.399 [2024-11-29 18:42:01.272361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:41.399 [2024-11-29 18:42:01.272368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:41.399 [2024-11-29 18:42:01.272375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.399 [2024-11-29 18:42:01.273710] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1383.850 ms, result 0 00:28:41.399 [2024-11-29 18:42:01.286063] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.399 [2024-11-29 18:42:01.302086] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:41.657 [2024-11-29 18:42:01.310215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:41.914 Validate MD5 checksum, iteration 1 00:28:41.914 18:42:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:41.914 18:42:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:41.914 18:42:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:41.914 18:42:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:41.914 18:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:41.914 18:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:41.914 18:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:41.915 18:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:41.915 18:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:41.915 18:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:41.915 18:42:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:41.915 18:42:01 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:41.915 18:42:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:41.915 18:42:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:41.915 18:42:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:42.172 [2024-11-29 18:42:01.882481] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:28:42.172 [2024-11-29 18:42:01.882800] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94249 ] 00:28:42.172 [2024-11-29 18:42:02.036523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.172 [2024-11-29 18:42:02.053225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.572  [2024-11-29T18:42:04.050Z] Copying: 695/1024 [MB] (695 MBps) [2024-11-29T18:42:10.626Z] Copying: 1024/1024 [MB] (average 673 MBps) 00:28:50.721 00:28:50.721 18:42:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:50.721 18:42:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:52.624 18:42:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:52.624 Validate MD5 checksum, iteration 2 00:28:52.624 18:42:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e7ca00eeffea492912e7ebf945bc2a95 00:28:52.624 18:42:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e7ca00eeffea492912e7ebf945bc2a95 != \e\7\c\a\0\0\e\e\f\f\e\a\4\9\2\9\1\2\e\7\e\b\f\9\4\5\b\c\2\a\9\5 ]] 00:28:52.624 18:42:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:52.624 18:42:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:52.624 18:42:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:52.624 18:42:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:52.624 18:42:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:52.624 18:42:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:52.624 18:42:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:52.624 18:42:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:52.624 18:42:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:52.624 [2024-11-29 18:42:12.166958] Starting SPDK v25.01-pre git sha1 
35cd3e84d / DPDK 23.11.0 initialization... 00:28:52.624 [2024-11-29 18:42:12.167225] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94365 ] 00:28:52.624 [2024-11-29 18:42:12.324775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.624 [2024-11-29 18:42:12.344268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.060  [2024-11-29T18:42:14.537Z] Copying: 543/1024 [MB] (543 MBps) [2024-11-29T18:42:15.107Z] Copying: 1024/1024 [MB] (average 572 MBps) 00:28:55.202 00:28:55.202 18:42:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:55.202 18:42:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:57.109 18:42:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:57.110 18:42:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4be4f97d1d37d7584d756bc7ef9510aa 00:28:57.110 18:42:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4be4f97d1d37d7584d756bc7ef9510aa != \4\b\e\4\f\9\7\d\1\d\3\7\d\7\5\8\4\d\7\5\6\b\c\7\e\f\9\5\1\0\a\a ]] 00:28:57.110 18:42:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:57.110 18:42:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:57.110 18:42:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:57.110 18:42:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:57.110 18:42:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:57.110 18:42:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:57.371 18:42:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:57.371 18:42:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:57.371 18:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:57.371 18:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:57.371 18:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94214 ]] 00:28:57.371 18:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94214 00:28:57.371 18:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94214 ']' 00:28:57.371 18:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94214 00:28:57.371 18:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:57.371 18:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.371 18:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94214 00:28:57.371 killing process with pid 94214 00:28:57.371 18:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:57.371 18:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:57.371 18:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94214' 00:28:57.371 18:42:17 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 94214 00:28:57.371 18:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94214 00:28:57.371 [2024-11-29 18:42:17.191102] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:57.371 [2024-11-29 18:42:17.194830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.371 [2024-11-29 18:42:17.194864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:57.371 [2024-11-29 18:42:17.194875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:57.371 [2024-11-29 18:42:17.194882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.371 [2024-11-29 18:42:17.194902] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:57.371 [2024-11-29 18:42:17.195430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.371 [2024-11-29 18:42:17.195463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:57.371 [2024-11-29 18:42:17.195472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.518 ms 00:28:57.371 [2024-11-29 18:42:17.195478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.371 [2024-11-29 18:42:17.195673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.371 [2024-11-29 18:42:17.195682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:57.371 [2024-11-29 18:42:17.195690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.173 ms 00:28:57.371 [2024-11-29 18:42:17.195697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.371 [2024-11-29 18:42:17.197159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.371 [2024-11-29 18:42:17.197302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:57.371 [2024-11-29 18:42:17.197315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.449 ms 00:28:57.371 [2024-11-29 18:42:17.197327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.371 [2024-11-29 18:42:17.198275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.371 [2024-11-29 18:42:17.198296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:57.371 [2024-11-29 18:42:17.198304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.917 ms 00:28:57.372 [2024-11-29 18:42:17.198311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.372 [2024-11-29 18:42:17.200373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.372 [2024-11-29 18:42:17.200500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:57.372 [2024-11-29 18:42:17.200516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.032 ms 00:28:57.372 [2024-11-29 18:42:17.200523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.372 [2024-11-29 18:42:17.201889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.372 [2024-11-29 18:42:17.201917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:57.372 [2024-11-29 18:42:17.201925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.338 ms 00:28:57.372 [2024-11-29 18:42:17.201931] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:57.372 [2024-11-29 18:42:17.201993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.372 [2024-11-29 18:42:17.202001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:57.372 [2024-11-29 18:42:17.202017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:28:57.372 [2024-11-29 18:42:17.202031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.372 [2024-11-29 18:42:17.203505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.372 [2024-11-29 18:42:17.203605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:57.372 [2024-11-29 18:42:17.203616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.461 ms 00:28:57.372 [2024-11-29 18:42:17.203622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.372 [2024-11-29 18:42:17.204899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.372 [2024-11-29 18:42:17.204925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:57.372 [2024-11-29 18:42:17.204932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.252 ms 00:28:57.372 [2024-11-29 18:42:17.204938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.372 [2024-11-29 18:42:17.206498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.372 [2024-11-29 18:42:17.206522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:57.372 [2024-11-29 18:42:17.206529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.534 ms 00:28:57.372 [2024-11-29 18:42:17.206536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.372 [2024-11-29 18:42:17.208149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.372 [2024-11-29 18:42:17.208175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:57.372 [2024-11-29 18:42:17.208183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.564 ms 00:28:57.372 [2024-11-29 18:42:17.208189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.372 [2024-11-29 18:42:17.208214] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:57.372 [2024-11-29 18:42:17.208226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:57.372 [2024-11-29 18:42:17.208235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:57.372 [2024-11-29 18:42:17.208241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:57.372 [2024-11-29 18:42:17.208248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:57.372 [2024-11-29 18:42:17.208255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:57.372 [2024-11-29 18:42:17.208261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:57.372 [2024-11-29 18:42:17.208267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:57.372 [2024-11-29 18:42:17.208273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:57.372 
[2024-11-29 18:42:17.208278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:57.372 [2024-11-29 18:42:17.208284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:57.372 [2024-11-29 18:42:17.208290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:57.372 [2024-11-29 18:42:17.208296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:57.372 [2024-11-29 18:42:17.208302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:57.372 [2024-11-29 18:42:17.208308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:57.372 [2024-11-29 18:42:17.208314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:57.372 [2024-11-29 18:42:17.208321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:57.372 [2024-11-29 18:42:17.208326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:57.372 [2024-11-29 18:42:17.208332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:57.372 [2024-11-29 18:42:17.208339] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:57.372 [2024-11-29 18:42:17.208346] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: eb885eaf-d12b-477b-802c-fba77e1cc9d7 00:28:57.372 [2024-11-29 18:42:17.208352] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:57.372 [2024-11-29 18:42:17.208358] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:57.372 [2024-11-29 18:42:17.208363] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:57.372 [2024-11-29 18:42:17.208375] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:57.372 [2024-11-29 18:42:17.208381] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:57.372 [2024-11-29 18:42:17.208390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:57.372 [2024-11-29 18:42:17.208398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:57.372 [2024-11-29 18:42:17.208405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:57.372 [2024-11-29 18:42:17.208411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:57.372 [2024-11-29 18:42:17.208418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.372 [2024-11-29 18:42:17.208425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:57.372 [2024-11-29 18:42:17.208432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.205 ms 00:28:57.372 [2024-11-29 18:42:17.208438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.372 [2024-11-29 18:42:17.210145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.372 [2024-11-29 18:42:17.210166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:57.372 [2024-11-29 18:42:17.210173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.693 ms 00:28:57.372 [2024-11-29 18:42:17.210180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
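Note: in the statistics dump above, "WAF: inf" follows directly from the counters printed next to it. Write amplification factor is total device writes divided by user writes, here 320 / 0, and the ratio is reported as inf because this shutdown persisted only FTL metadata (L2P, NV cache, band, trim and superblock data) and wrote no user data.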
00:28:57.372 [2024-11-29 18:42:17.210272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.372 [2024-11-29 18:42:17.210279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:57.372 [2024-11-29 18:42:17.210285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:28:57.372 [2024-11-29 18:42:17.210291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.372 [2024-11-29 18:42:17.216488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:57.372 [2024-11-29 18:42:17.216632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:57.372 [2024-11-29 18:42:17.216680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:57.372 [2024-11-29 18:42:17.216703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.372 [2024-11-29 18:42:17.216738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:57.372 [2024-11-29 18:42:17.216756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:57.372 [2024-11-29 18:42:17.216773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:57.372 [2024-11-29 18:42:17.216793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.372 [2024-11-29 18:42:17.216871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:57.372 [2024-11-29 18:42:17.216891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:57.372 [2024-11-29 18:42:17.216920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:57.372 [2024-11-29 18:42:17.216962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.372 [2024-11-29 18:42:17.216992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:57.372 [2024-11-29 18:42:17.217009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:57.373 [2024-11-29 18:42:17.217024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:57.373 [2024-11-29 18:42:17.217039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.373 [2024-11-29 18:42:17.228230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:57.373 [2024-11-29 18:42:17.228544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:57.373 [2024-11-29 18:42:17.228635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:57.373 [2024-11-29 18:42:17.228655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.373 [2024-11-29 18:42:17.237266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:57.373 [2024-11-29 18:42:17.237376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:57.373 [2024-11-29 18:42:17.237419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:57.373 [2024-11-29 18:42:17.237437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.373 [2024-11-29 18:42:17.237529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:57.373 [2024-11-29 18:42:17.237552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:57.373 [2024-11-29 18:42:17.237568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:57.373 [2024-11-29 18:42:17.237583] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.373 [2024-11-29 18:42:17.237672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:57.373 [2024-11-29 18:42:17.237696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:57.373 [2024-11-29 18:42:17.237713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:57.373 [2024-11-29 18:42:17.237728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.373 [2024-11-29 18:42:17.237834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:57.373 [2024-11-29 18:42:17.237855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:57.373 [2024-11-29 18:42:17.237871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:57.373 [2024-11-29 18:42:17.237887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.373 [2024-11-29 18:42:17.237965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:57.373 [2024-11-29 18:42:17.237992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:57.373 [2024-11-29 18:42:17.238020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:57.373 [2024-11-29 18:42:17.238038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.373 [2024-11-29 18:42:17.238085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:57.373 [2024-11-29 18:42:17.238167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:57.373 [2024-11-29 18:42:17.238183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:57.373 [2024-11-29 18:42:17.238198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.373 [2024-11-29 18:42:17.238255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:57.373 [2024-11-29 18:42:17.238306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:57.373 [2024-11-29 18:42:17.238326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:57.373 [2024-11-29 18:42:17.238340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.373 [2024-11-29 18:42:17.238486] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 43.616 ms, result 0 00:28:57.635 18:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:57.635 18:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:57.635 18:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:57.635 18:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:57.635 18:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:57.635 18:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:57.635 Remove shared memory files 00:28:57.635 18:42:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:57.635 18:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:57.635 18:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:57.635 18:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:57.635 18:42:17 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid94027 00:28:57.635 18:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:57.635 18:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:57.635 ************************************ 00:28:57.635 END TEST ftl_upgrade_shutdown 00:28:57.635 ************************************ 00:28:57.635 00:28:57.635 real 1m17.519s 00:28:57.635 user 1m41.456s 00:28:57.635 sys 0m21.404s 00:28:57.635 18:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:57.635 18:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:57.635 18:42:17 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:57.635 18:42:17 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:57.635 18:42:17 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:28:57.635 18:42:17 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:57.635 18:42:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:57.635 ************************************ 00:28:57.635 START TEST ftl_restore_fast 00:28:57.635 ************************************ 00:28:57.635 18:42:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:57.635 * Looking for test storage... 00:28:57.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:57.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.896 --rc genhtml_branch_coverage=1 00:28:57.896 --rc genhtml_function_coverage=1 00:28:57.896 --rc genhtml_legend=1 00:28:57.896 --rc geninfo_all_blocks=1 00:28:57.896 --rc geninfo_unexecuted_blocks=1 00:28:57.896 00:28:57.896 ' 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:57.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.896 --rc genhtml_branch_coverage=1 00:28:57.896 --rc genhtml_function_coverage=1 00:28:57.896 --rc genhtml_legend=1 00:28:57.896 --rc geninfo_all_blocks=1 00:28:57.896 --rc geninfo_unexecuted_blocks=1 00:28:57.896 00:28:57.896 ' 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:57.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.896 --rc genhtml_branch_coverage=1 00:28:57.896 --rc genhtml_function_coverage=1 00:28:57.896 --rc genhtml_legend=1 00:28:57.896 --rc geninfo_all_blocks=1 00:28:57.896 --rc geninfo_unexecuted_blocks=1 00:28:57.896 00:28:57.896 ' 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:57.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.896 --rc genhtml_branch_coverage=1 00:28:57.896 --rc genhtml_function_coverage=1 00:28:57.896 --rc genhtml_legend=1 00:28:57.896 --rc geninfo_all_blocks=1 00:28:57.896 --rc geninfo_unexecuted_blocks=1 00:28:57.896 00:28:57.896 ' 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:57.896 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:57.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.r32ICe457G 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=94498 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 94498 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 94498 ']' 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:57.897 18:42:17 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:57.897 [2024-11-29 18:42:17.716827] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
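Note: the xtrace above shows restore.sh consuming its arguments '-f -c 0000:00:10.0 0000:00:11.0'. A minimal sketch of that flag handling, reconstructed only from the trace (the actual script may differ):

    # hedged reconstruction of restore.sh's argument parsing, per the xtrace above
    while getopts ':u:c:f' opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;     # 0000:00:10.0 in this run
        f) fast_shutdown=1 ;;      # selects the --fast-shutdown FTL create path
        # 'u' also takes an argument per the optstring, but is not exercised here
      esac
    done
    shift $((OPTIND - 1))          # xtrace expands this and prints 'shift 3'
    device=$1                      # 0000:00:11.0, the base device
    timeout=240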
00:28:57.897 [2024-11-29 18:42:17.717073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94498 ] 00:28:58.157 [2024-11-29 18:42:17.872368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.157 [2024-11-29 18:42:17.890071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.727 18:42:18 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:58.727 18:42:18 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:28:58.727 18:42:18 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:58.727 18:42:18 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:58.727 18:42:18 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:58.727 18:42:18 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:58.727 18:42:18 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:58.727 18:42:18 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:58.988 18:42:18 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:58.988 18:42:18 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:58.988 18:42:18 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:58.988 18:42:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:58.988 18:42:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:58.988 18:42:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:58.988 18:42:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:58.988 18:42:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:59.249 18:42:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:59.249 { 00:28:59.249 "name": "nvme0n1", 00:28:59.249 "aliases": [ 00:28:59.249 "e229a4b8-36fc-41c3-999c-e5e2acf13c2c" 00:28:59.249 ], 00:28:59.249 "product_name": "NVMe disk", 00:28:59.249 "block_size": 4096, 00:28:59.249 "num_blocks": 1310720, 00:28:59.249 "uuid": "e229a4b8-36fc-41c3-999c-e5e2acf13c2c", 00:28:59.249 "numa_id": -1, 00:28:59.249 "assigned_rate_limits": { 00:28:59.249 "rw_ios_per_sec": 0, 00:28:59.249 "rw_mbytes_per_sec": 0, 00:28:59.249 "r_mbytes_per_sec": 0, 00:28:59.249 "w_mbytes_per_sec": 0 00:28:59.249 }, 00:28:59.249 "claimed": true, 00:28:59.249 "claim_type": "read_many_write_one", 00:28:59.249 "zoned": false, 00:28:59.249 "supported_io_types": { 00:28:59.249 "read": true, 00:28:59.249 "write": true, 00:28:59.249 "unmap": true, 00:28:59.249 "flush": true, 00:28:59.249 "reset": true, 00:28:59.249 "nvme_admin": true, 00:28:59.249 "nvme_io": true, 00:28:59.249 "nvme_io_md": false, 00:28:59.249 "write_zeroes": true, 00:28:59.249 "zcopy": false, 00:28:59.249 "get_zone_info": false, 00:28:59.249 "zone_management": false, 00:28:59.249 "zone_append": false, 00:28:59.249 "compare": true, 00:28:59.249 "compare_and_write": false, 00:28:59.249 "abort": true, 00:28:59.249 "seek_hole": false, 00:28:59.249 "seek_data": false, 00:28:59.249 "copy": true, 00:28:59.249 "nvme_iov_md": 
false 00:28:59.249 }, 00:28:59.249 "driver_specific": { 00:28:59.249 "nvme": [ 00:28:59.249 { 00:28:59.249 "pci_address": "0000:00:11.0", 00:28:59.249 "trid": { 00:28:59.249 "trtype": "PCIe", 00:28:59.249 "traddr": "0000:00:11.0" 00:28:59.249 }, 00:28:59.249 "ctrlr_data": { 00:28:59.249 "cntlid": 0, 00:28:59.249 "vendor_id": "0x1b36", 00:28:59.249 "model_number": "QEMU NVMe Ctrl", 00:28:59.249 "serial_number": "12341", 00:28:59.249 "firmware_revision": "8.0.0", 00:28:59.249 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:59.249 "oacs": { 00:28:59.249 "security": 0, 00:28:59.249 "format": 1, 00:28:59.249 "firmware": 0, 00:28:59.249 "ns_manage": 1 00:28:59.249 }, 00:28:59.249 "multi_ctrlr": false, 00:28:59.249 "ana_reporting": false 00:28:59.249 }, 00:28:59.249 "vs": { 00:28:59.249 "nvme_version": "1.4" 00:28:59.249 }, 00:28:59.249 "ns_data": { 00:28:59.249 "id": 1, 00:28:59.249 "can_share": false 00:28:59.249 } 00:28:59.249 } 00:28:59.249 ], 00:28:59.249 "mp_policy": "active_passive" 00:28:59.249 } 00:28:59.249 } 00:28:59.249 ]' 00:28:59.249 18:42:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:59.249 18:42:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:59.249 18:42:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:59.249 18:42:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:59.249 18:42:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:59.249 18:42:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:28:59.249 18:42:18 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:59.249 18:42:18 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:59.249 18:42:18 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:59.249 18:42:18 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:59.249 18:42:18 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:59.510 18:42:19 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=561cb08b-1933-4e79-8dc1-7fc231005a2b 00:28:59.510 18:42:19 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:59.510 18:42:19 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 561cb08b-1933-4e79-8dc1-7fc231005a2b 00:28:59.510 18:42:19 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:59.771 18:42:19 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=0efa006f-1dd1-4956-8927-aa8a0d7b8927 00:28:59.771 18:42:19 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0efa006f-1dd1-4956-8927-aa8a0d7b8927 00:29:00.032 18:42:19 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=e36a243d-d165-486b-814c-eb56600d8d26 00:29:00.032 18:42:19 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:29:00.032 18:42:19 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e36a243d-d165-486b-814c-eb56600d8d26 00:29:00.032 18:42:19 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:29:00.032 18:42:19 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:00.032 18:42:19 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local 
base_bdev=e36a243d-d165-486b-814c-eb56600d8d26 00:29:00.032 18:42:19 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:29:00.032 18:42:19 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size e36a243d-d165-486b-814c-eb56600d8d26 00:29:00.032 18:42:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e36a243d-d165-486b-814c-eb56600d8d26 00:29:00.032 18:42:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:00.032 18:42:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:00.032 18:42:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:00.032 18:42:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e36a243d-d165-486b-814c-eb56600d8d26 00:29:00.294 18:42:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:00.294 { 00:29:00.294 "name": "e36a243d-d165-486b-814c-eb56600d8d26", 00:29:00.294 "aliases": [ 00:29:00.294 "lvs/nvme0n1p0" 00:29:00.294 ], 00:29:00.294 "product_name": "Logical Volume", 00:29:00.294 "block_size": 4096, 00:29:00.294 "num_blocks": 26476544, 00:29:00.294 "uuid": "e36a243d-d165-486b-814c-eb56600d8d26", 00:29:00.294 "assigned_rate_limits": { 00:29:00.294 "rw_ios_per_sec": 0, 00:29:00.294 "rw_mbytes_per_sec": 0, 00:29:00.294 "r_mbytes_per_sec": 0, 00:29:00.294 "w_mbytes_per_sec": 0 00:29:00.294 }, 00:29:00.294 "claimed": false, 00:29:00.294 "zoned": false, 00:29:00.294 "supported_io_types": { 00:29:00.294 "read": true, 00:29:00.294 "write": true, 00:29:00.294 "unmap": true, 00:29:00.294 "flush": false, 00:29:00.294 "reset": true, 00:29:00.294 "nvme_admin": false, 00:29:00.294 "nvme_io": false, 00:29:00.294 "nvme_io_md": false, 00:29:00.294 "write_zeroes": true, 00:29:00.294 "zcopy": false, 00:29:00.294 "get_zone_info": false, 00:29:00.294 "zone_management": false, 00:29:00.294 "zone_append": false, 00:29:00.294 "compare": false, 00:29:00.294 "compare_and_write": false, 00:29:00.294 "abort": false, 00:29:00.294 "seek_hole": true, 00:29:00.294 "seek_data": true, 00:29:00.294 "copy": false, 00:29:00.294 "nvme_iov_md": false 00:29:00.294 }, 00:29:00.294 "driver_specific": { 00:29:00.294 "lvol": { 00:29:00.294 "lvol_store_uuid": "0efa006f-1dd1-4956-8927-aa8a0d7b8927", 00:29:00.294 "base_bdev": "nvme0n1", 00:29:00.294 "thin_provision": true, 00:29:00.294 "num_allocated_clusters": 0, 00:29:00.294 "snapshot": false, 00:29:00.294 "clone": false, 00:29:00.294 "esnap_clone": false 00:29:00.294 } 00:29:00.294 } 00:29:00.294 } 00:29:00.294 ]' 00:29:00.294 18:42:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:00.294 18:42:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:00.294 18:42:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:00.294 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:00.294 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:00.294 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:00.294 18:42:20 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:29:00.294 18:42:20 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:29:00.294 18:42:20 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
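Note: the two get_bdev_size results above check out; the helper converts block_size * num_blocks into MiB:

    # worked check of the sizes reported above (values taken from bdev_get_bdevs)
    bs=4096 nb=1310720                     # nvme0n1
    echo $(( bs * nb / 1024 / 1024 ))      # 5120 MiB
    bs=4096 nb=26476544                    # lvol e36a243d-d165-486b-814c-eb56600d8d26
    echo $(( bs * nb / 1024 / 1024 ))      # 103424 MiB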
00:29:00.553 18:42:20 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:00.553 18:42:20 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:00.553 18:42:20 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size e36a243d-d165-486b-814c-eb56600d8d26 00:29:00.553 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e36a243d-d165-486b-814c-eb56600d8d26 00:29:00.553 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:00.553 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:00.553 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:00.553 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e36a243d-d165-486b-814c-eb56600d8d26 00:29:00.553 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:00.553 { 00:29:00.553 "name": "e36a243d-d165-486b-814c-eb56600d8d26", 00:29:00.553 "aliases": [ 00:29:00.553 "lvs/nvme0n1p0" 00:29:00.553 ], 00:29:00.553 "product_name": "Logical Volume", 00:29:00.553 "block_size": 4096, 00:29:00.553 "num_blocks": 26476544, 00:29:00.553 "uuid": "e36a243d-d165-486b-814c-eb56600d8d26", 00:29:00.554 "assigned_rate_limits": { 00:29:00.554 "rw_ios_per_sec": 0, 00:29:00.554 "rw_mbytes_per_sec": 0, 00:29:00.554 "r_mbytes_per_sec": 0, 00:29:00.554 "w_mbytes_per_sec": 0 00:29:00.554 }, 00:29:00.554 "claimed": false, 00:29:00.554 "zoned": false, 00:29:00.554 "supported_io_types": { 00:29:00.554 "read": true, 00:29:00.554 "write": true, 00:29:00.554 "unmap": true, 00:29:00.554 "flush": false, 00:29:00.554 "reset": true, 00:29:00.554 "nvme_admin": false, 00:29:00.554 "nvme_io": false, 00:29:00.554 "nvme_io_md": false, 00:29:00.554 "write_zeroes": true, 00:29:00.554 "zcopy": false, 00:29:00.554 "get_zone_info": false, 00:29:00.554 "zone_management": false, 00:29:00.554 "zone_append": false, 00:29:00.554 "compare": false, 00:29:00.554 "compare_and_write": false, 00:29:00.554 "abort": false, 00:29:00.554 "seek_hole": true, 00:29:00.554 "seek_data": true, 00:29:00.554 "copy": false, 00:29:00.554 "nvme_iov_md": false 00:29:00.554 }, 00:29:00.554 "driver_specific": { 00:29:00.554 "lvol": { 00:29:00.554 "lvol_store_uuid": "0efa006f-1dd1-4956-8927-aa8a0d7b8927", 00:29:00.554 "base_bdev": "nvme0n1", 00:29:00.554 "thin_provision": true, 00:29:00.554 "num_allocated_clusters": 0, 00:29:00.554 "snapshot": false, 00:29:00.554 "clone": false, 00:29:00.554 "esnap_clone": false 00:29:00.554 } 00:29:00.554 } 00:29:00.554 } 00:29:00.554 ]' 00:29:00.554 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:00.554 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:00.554 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:00.814 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:00.814 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:00.814 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:00.814 18:42:20 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:29:00.814 18:42:20 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:00.814 18:42:20 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 
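Note: at this point the whole bdev stack for the restore test is assembled. Condensed to the bare rpc.py calls as they appear in the trace above (the $rpc shorthand is only for readability here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe, 1310720 x 4096 B blocks
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # -> lvs 0efa006f-1dd1-4956-8927-aa8a0d7b8927
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 0efa006f-1dd1-4956-8927-aa8a0d7b8927   # thin 103424 MiB lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV-cache NVMe
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # -> nvc0n1p0, the 5171 MiB cache slice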
00:29:00.814 18:42:20 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size e36a243d-d165-486b-814c-eb56600d8d26 00:29:00.814 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e36a243d-d165-486b-814c-eb56600d8d26 00:29:00.814 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:00.814 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:00.814 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:00.814 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e36a243d-d165-486b-814c-eb56600d8d26 00:29:01.075 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:01.075 { 00:29:01.075 "name": "e36a243d-d165-486b-814c-eb56600d8d26", 00:29:01.075 "aliases": [ 00:29:01.075 "lvs/nvme0n1p0" 00:29:01.075 ], 00:29:01.075 "product_name": "Logical Volume", 00:29:01.075 "block_size": 4096, 00:29:01.075 "num_blocks": 26476544, 00:29:01.075 "uuid": "e36a243d-d165-486b-814c-eb56600d8d26", 00:29:01.075 "assigned_rate_limits": { 00:29:01.075 "rw_ios_per_sec": 0, 00:29:01.075 "rw_mbytes_per_sec": 0, 00:29:01.075 "r_mbytes_per_sec": 0, 00:29:01.075 "w_mbytes_per_sec": 0 00:29:01.075 }, 00:29:01.075 "claimed": false, 00:29:01.075 "zoned": false, 00:29:01.075 "supported_io_types": { 00:29:01.075 "read": true, 00:29:01.075 "write": true, 00:29:01.075 "unmap": true, 00:29:01.075 "flush": false, 00:29:01.075 "reset": true, 00:29:01.075 "nvme_admin": false, 00:29:01.075 "nvme_io": false, 00:29:01.075 "nvme_io_md": false, 00:29:01.075 "write_zeroes": true, 00:29:01.075 "zcopy": false, 00:29:01.075 "get_zone_info": false, 00:29:01.075 "zone_management": false, 00:29:01.075 "zone_append": false, 00:29:01.075 "compare": false, 00:29:01.075 "compare_and_write": false, 00:29:01.075 "abort": false, 00:29:01.075 "seek_hole": true, 00:29:01.075 "seek_data": true, 00:29:01.075 "copy": false, 00:29:01.075 "nvme_iov_md": false 00:29:01.075 }, 00:29:01.075 "driver_specific": { 00:29:01.075 "lvol": { 00:29:01.075 "lvol_store_uuid": "0efa006f-1dd1-4956-8927-aa8a0d7b8927", 00:29:01.075 "base_bdev": "nvme0n1", 00:29:01.075 "thin_provision": true, 00:29:01.075 "num_allocated_clusters": 0, 00:29:01.075 "snapshot": false, 00:29:01.075 "clone": false, 00:29:01.075 "esnap_clone": false 00:29:01.075 } 00:29:01.075 } 00:29:01.075 } 00:29:01.075 ]' 00:29:01.075 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:01.075 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:01.075 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:01.075 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:01.075 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:01.075 18:42:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:01.075 18:42:20 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:29:01.075 18:42:20 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e36a243d-d165-486b-814c-eb56600d8d26 --l2p_dram_limit 10' 00:29:01.075 18:42:20 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:29:01.075 18:42:20 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:01.075 18:42:20 ftl.ftl_restore_fast 
-- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:01.075 18:42:20 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:29:01.075 18:42:20 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:29:01.075 18:42:20 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e36a243d-d165-486b-814c-eb56600d8d26 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:29:01.337 [2024-11-29 18:42:21.136476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.337 [2024-11-29 18:42:21.136516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:01.337 [2024-11-29 18:42:21.136526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:01.337 [2024-11-29 18:42:21.136534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.337 [2024-11-29 18:42:21.136575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.337 [2024-11-29 18:42:21.136589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:01.337 [2024-11-29 18:42:21.136596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:01.337 [2024-11-29 18:42:21.136606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.337 [2024-11-29 18:42:21.136620] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:01.337 [2024-11-29 18:42:21.136804] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:01.337 [2024-11-29 18:42:21.136819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.337 [2024-11-29 18:42:21.136829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:01.337 [2024-11-29 18:42:21.136835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:29:01.337 [2024-11-29 18:42:21.136842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.337 [2024-11-29 18:42:21.136866] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 941bac55-3541-4ffc-a49c-047f115f854f 00:29:01.337 [2024-11-29 18:42:21.137923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.337 [2024-11-29 18:42:21.137946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:01.337 [2024-11-29 18:42:21.137956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:01.337 [2024-11-29 18:42:21.137962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.337 [2024-11-29 18:42:21.142838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.337 [2024-11-29 18:42:21.142864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:01.337 [2024-11-29 18:42:21.142874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.799 ms 00:29:01.337 [2024-11-29 18:42:21.142880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.337 [2024-11-29 18:42:21.142942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.337 [2024-11-29 18:42:21.142948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:01.337 [2024-11-29 18:42:21.142956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 
00:29:01.337 [2024-11-29 18:42:21.142961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.337 [2024-11-29 18:42:21.142992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.337 [2024-11-29 18:42:21.142999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:01.337 [2024-11-29 18:42:21.143007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:01.337 [2024-11-29 18:42:21.143012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.337 [2024-11-29 18:42:21.143030] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:01.337 [2024-11-29 18:42:21.144307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.337 [2024-11-29 18:42:21.144336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:01.337 [2024-11-29 18:42:21.144343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:29:01.337 [2024-11-29 18:42:21.144350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.337 [2024-11-29 18:42:21.144377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.337 [2024-11-29 18:42:21.144385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:01.337 [2024-11-29 18:42:21.144391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:01.337 [2024-11-29 18:42:21.144399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.337 [2024-11-29 18:42:21.144412] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:01.337 [2024-11-29 18:42:21.144536] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:01.337 [2024-11-29 18:42:21.144546] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:01.337 [2024-11-29 18:42:21.144556] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:01.337 [2024-11-29 18:42:21.144566] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:01.337 [2024-11-29 18:42:21.144574] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:01.337 [2024-11-29 18:42:21.144586] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:01.337 [2024-11-29 18:42:21.144593] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:01.337 [2024-11-29 18:42:21.144601] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:01.338 [2024-11-29 18:42:21.144608] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:01.338 [2024-11-29 18:42:21.144617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.338 [2024-11-29 18:42:21.144624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:01.338 [2024-11-29 18:42:21.144631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:29:01.338 [2024-11-29 18:42:21.144640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.338 [2024-11-29 18:42:21.144704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.338 [2024-11-29 
18:42:21.144713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:01.338 [2024-11-29 18:42:21.144719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:01.338 [2024-11-29 18:42:21.144727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.338 [2024-11-29 18:42:21.144798] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:01.338 [2024-11-29 18:42:21.144810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:01.338 [2024-11-29 18:42:21.144818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:01.338 [2024-11-29 18:42:21.144827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:01.338 [2024-11-29 18:42:21.144833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:01.338 [2024-11-29 18:42:21.144839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:01.338 [2024-11-29 18:42:21.144844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:01.338 [2024-11-29 18:42:21.144850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:01.338 [2024-11-29 18:42:21.144856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:01.338 [2024-11-29 18:42:21.144862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:01.338 [2024-11-29 18:42:21.144867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:01.338 [2024-11-29 18:42:21.144873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:01.338 [2024-11-29 18:42:21.144877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:01.338 [2024-11-29 18:42:21.144885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:01.338 [2024-11-29 18:42:21.144890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:01.338 [2024-11-29 18:42:21.144897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:01.338 [2024-11-29 18:42:21.144902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:01.338 [2024-11-29 18:42:21.144907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:01.338 [2024-11-29 18:42:21.144912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:01.338 [2024-11-29 18:42:21.144919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:01.338 [2024-11-29 18:42:21.144924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:01.338 [2024-11-29 18:42:21.144930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:01.338 [2024-11-29 18:42:21.144935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:01.338 [2024-11-29 18:42:21.144941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:01.338 [2024-11-29 18:42:21.144946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:01.338 [2024-11-29 18:42:21.144952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:01.338 [2024-11-29 18:42:21.144958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:01.338 [2024-11-29 18:42:21.144964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:01.338 [2024-11-29 18:42:21.144970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 
00:29:01.338 [2024-11-29 18:42:21.144980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:01.338 [2024-11-29 18:42:21.144986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:01.338 [2024-11-29 18:42:21.144993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:01.338 [2024-11-29 18:42:21.144998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:01.338 [2024-11-29 18:42:21.145005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:01.338 [2024-11-29 18:42:21.145010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:01.338 [2024-11-29 18:42:21.145017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:01.338 [2024-11-29 18:42:21.145023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:01.338 [2024-11-29 18:42:21.145030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:01.338 [2024-11-29 18:42:21.145035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:01.338 [2024-11-29 18:42:21.145042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:01.338 [2024-11-29 18:42:21.145048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:01.338 [2024-11-29 18:42:21.145054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:01.338 [2024-11-29 18:42:21.145060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:01.338 [2024-11-29 18:42:21.145068] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:01.338 [2024-11-29 18:42:21.145074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:01.338 [2024-11-29 18:42:21.145083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:01.338 [2024-11-29 18:42:21.145089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:01.338 [2024-11-29 18:42:21.145098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:01.338 [2024-11-29 18:42:21.145110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:01.338 [2024-11-29 18:42:21.145118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:01.338 [2024-11-29 18:42:21.145124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:01.338 [2024-11-29 18:42:21.145131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:01.338 [2024-11-29 18:42:21.145136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:01.338 [2024-11-29 18:42:21.145147] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:01.338 [2024-11-29 18:42:21.145155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:01.338 [2024-11-29 18:42:21.145163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:01.338 [2024-11-29 18:42:21.145170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:01.338 [2024-11-29 18:42:21.145177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 
00:29:01.338 [2024-11-29 18:42:21.145183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:01.338 [2024-11-29 18:42:21.145191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:01.338 [2024-11-29 18:42:21.145197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:01.338 [2024-11-29 18:42:21.145206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:01.338 [2024-11-29 18:42:21.145212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:01.338 [2024-11-29 18:42:21.145219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:01.338 [2024-11-29 18:42:21.145226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:01.338 [2024-11-29 18:42:21.145233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:01.339 [2024-11-29 18:42:21.145239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:01.339 [2024-11-29 18:42:21.145247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:01.339 [2024-11-29 18:42:21.145254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:01.339 [2024-11-29 18:42:21.145261] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:01.339 [2024-11-29 18:42:21.145268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:01.339 [2024-11-29 18:42:21.145276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:01.339 [2024-11-29 18:42:21.145282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:01.339 [2024-11-29 18:42:21.145290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:01.339 [2024-11-29 18:42:21.145296] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:01.339 [2024-11-29 18:42:21.145304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.339 [2024-11-29 18:42:21.145313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:01.339 [2024-11-29 18:42:21.145323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:29:01.339 [2024-11-29 18:42:21.145333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.339 [2024-11-29 18:42:21.145363] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs 
00:29:01.339 [2024-11-29 18:42:21.145363] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:29:01.339 [2024-11-29 18:42:21.145373] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:29:05.548 [2024-11-29 18:42:24.710810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.548 [2024-11-29 18:42:24.710877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache
00:29:05.548 [2024-11-29 18:42:24.710894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3565.428 ms
00:29:05.548 [2024-11-29 18:42:24.710903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.548 [2024-11-29 18:42:24.720557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.548 [2024-11-29 18:42:24.720746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:29:05.548 [2024-11-29 18:42:24.720768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.566 ms
00:29:05.548 [2024-11-29 18:42:24.720781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.548 [2024-11-29 18:42:24.720909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.548 [2024-11-29 18:42:24.720920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:29:05.548 [2024-11-29 18:42:24.720931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms
00:29:05.548 [2024-11-29 18:42:24.720938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.548 [2024-11-29 18:42:24.730669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.548 [2024-11-29 18:42:24.730708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:29:05.548 [2024-11-29 18:42:24.730720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.698 ms
00:29:05.548 [2024-11-29 18:42:24.730730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.548 [2024-11-29 18:42:24.730767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.548 [2024-11-29 18:42:24.730775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:29:05.548 [2024-11-29 18:42:24.730785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:29:05.548 [2024-11-29 18:42:24.730793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.548 [2024-11-29 18:42:24.731198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.548 [2024-11-29 18:42:24.731215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:29:05.548 [2024-11-29 18:42:24.731226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms
00:29:05.548 [2024-11-29 18:42:24.731234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.548 [2024-11-29 18:42:24.731348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.548 [2024-11-29 18:42:24.731358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:29:05.548 [2024-11-29 18:42:24.731369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms
00:29:05.548 [2024-11-29 18:42:24.731378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.548 [2024-11-29 18:42:24.737780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.548 [2024-11-29 18:42:24.737815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:29:05.548 [2024-11-29 18:42:24.737826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.373 ms 00:29:05.548 [2024-11-29 18:42:24.737837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.548 [2024-11-29 18:42:24.759162] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:05.548 [2024-11-29 18:42:24.762888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.548 [2024-11-29 18:42:24.762937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:05.548 [2024-11-29 18:42:24.762951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.984 ms 00:29:05.548 [2024-11-29 18:42:24.762963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.548 [2024-11-29 18:42:24.846777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.548 [2024-11-29 18:42:24.846843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:05.548 [2024-11-29 18:42:24.846857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.770 ms 00:29:05.548 [2024-11-29 18:42:24.846871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.548 [2024-11-29 18:42:24.847068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.548 [2024-11-29 18:42:24.847082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:05.548 [2024-11-29 18:42:24.847092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:29:05.548 [2024-11-29 18:42:24.847102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.548 [2024-11-29 18:42:24.851495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.548 [2024-11-29 18:42:24.851544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:05.548 [2024-11-29 18:42:24.851559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.373 ms 00:29:05.548 [2024-11-29 18:42:24.851571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.548 [2024-11-29 18:42:24.855291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.548 [2024-11-29 18:42:24.855339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:05.548 [2024-11-29 18:42:24.855350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.676 ms 00:29:05.549 [2024-11-29 18:42:24.855361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.549 [2024-11-29 18:42:24.855724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.549 [2024-11-29 18:42:24.855744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:05.549 [2024-11-29 18:42:24.855753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:29:05.549 [2024-11-29 18:42:24.855766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.549 [2024-11-29 18:42:24.896368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.549 [2024-11-29 18:42:24.896429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:05.549 [2024-11-29 18:42:24.896446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.581 ms 00:29:05.549 [2024-11-29 18:42:24.896494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.549 
[2024-11-29 18:42:24.901995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.549 [2024-11-29 18:42:24.902223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:29:05.549 [2024-11-29 18:42:24.902243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.442 ms
00:29:05.549 [2024-11-29 18:42:24.902255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.549 [2024-11-29 18:42:24.907283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.549 [2024-11-29 18:42:24.907334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:29:05.549 [2024-11-29 18:42:24.907345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.986 ms
00:29:05.549 [2024-11-29 18:42:24.907354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.549 [2024-11-29 18:42:24.912429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.549 [2024-11-29 18:42:24.912499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:29:05.549 [2024-11-29 18:42:24.912511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.033 ms
00:29:05.549 [2024-11-29 18:42:24.912524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.549 [2024-11-29 18:42:24.912571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.549 [2024-11-29 18:42:24.912592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:29:05.549 [2024-11-29 18:42:24.912601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:29:05.549 [2024-11-29 18:42:24.912611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.549 [2024-11-29 18:42:24.912699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.549 [2024-11-29 18:42:24.912712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:29:05.549 [2024-11-29 18:42:24.912726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:29:05.549 [2024-11-29 18:42:24.912739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.549 [2024-11-29 18:42:24.914113] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3777.146 ms, result 0
00:29:05.549 {
00:29:05.549 "name": "ftl0",
00:29:05.549 "uuid": "941bac55-3541-4ffc-a49c-047f115f854f"
00:29:05.549 }
00:29:05.549 18:42:24 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:29:05.549 18:42:24 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:29:05.549 18:42:25 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}'
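Lines 61 through 63 of restore.sh, traced just above, assemble the configuration that the restore phase boots from: rpc.py save_subsystem_config -n bdev prints the live bdev subsystem configuration as JSON, and the two echo lines wrap it in the {"subsystems": [...]} envelope that SPDK applications accept. The xtrace does not show where the output is redirected; the spdk_dd invocation later in this log reads test/ftl/config/ftl.json, so presumably the three lines are concatenated into that file. A sketch of the equivalent shell, with the target path assumed rather than shown:

    {
        echo '{"subsystems": ['
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # redirection target assumed

The bdev_ftl_unload call at line 65 below then tears ftl0 down in an orderly way (note the Set FTL clean state step in the shutdown trace that follows), so a later process can recreate the device from the saved configuration.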
00:29:05.549 18:42:25 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 [2024-11-29 18:42:25.361268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.549 [2024-11-29 18:42:25.361324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:29:05.549 [2024-11-29 18:42:25.361341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:29:05.549 [2024-11-29 18:42:25.361350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.549 [2024-11-29 18:42:25.361378] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:05.549 [2024-11-29 18:42:25.362141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.549 [2024-11-29 18:42:25.362183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:29:05.549 [2024-11-29 18:42:25.362196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms
00:29:05.549 [2024-11-29 18:42:25.362211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.549 [2024-11-29 18:42:25.362503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.549 [2024-11-29 18:42:25.362525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:29:05.549 [2024-11-29 18:42:25.362539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms
00:29:05.549 [2024-11-29 18:42:25.362551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.549 [2024-11-29 18:42:25.365785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.549 [2024-11-29 18:42:25.365807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:29:05.549 [2024-11-29 18:42:25.365822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.217 ms
00:29:05.549 [2024-11-29 18:42:25.365831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.549 [2024-11-29 18:42:25.372067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.549 [2024-11-29 18:42:25.372253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:29:05.549 [2024-11-29 18:42:25.372273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.217 ms
00:29:05.549 [2024-11-29 18:42:25.372286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.549 [2024-11-29 18:42:25.375356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.549 [2024-11-29 18:42:25.375542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:29:05.549 [2024-11-29 18:42:25.375561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.989 ms
00:29:05.549 [2024-11-29 18:42:25.375571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.549 [2024-11-29 18:42:25.382413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.549 [2024-11-29 18:42:25.382498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:29:05.549 [2024-11-29 18:42:25.382512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.718 ms
00:29:05.549 [2024-11-29 18:42:25.382526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.549 [2024-11-29 18:42:25.382672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.549 [2024-11-29 18:42:25.382688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:29:05.549 [2024-11-29 18:42:25.382698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms
00:29:05.549 [2024-11-29 18:42:25.382708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.549 [2024-11-29 18:42:25.385354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:05.549 [2024-11-29 18:42:25.385389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:29:05.549 [2024-11-29 18:42:25.385398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.628 ms
00:29:05.549
[2024-11-29 18:42:25.385407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.549 [2024-11-29 18:42:25.387022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.549 [2024-11-29 18:42:25.387060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:05.549 [2024-11-29 18:42:25.387068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.583 ms 00:29:05.549 [2024-11-29 18:42:25.387077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.549 [2024-11-29 18:42:25.388411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.549 [2024-11-29 18:42:25.388444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:05.549 [2024-11-29 18:42:25.388470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 00:29:05.549 [2024-11-29 18:42:25.388479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.549 [2024-11-29 18:42:25.390063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.549 [2024-11-29 18:42:25.390184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:05.549 [2024-11-29 18:42:25.390198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.531 ms 00:29:05.549 [2024-11-29 18:42:25.390208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.549 [2024-11-29 18:42:25.390237] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:05.549 [2024-11-29 18:42:25.390252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:29:05.549 [2024-11-29 18:42:25.390372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:05.549 [2024-11-29 18:42:25.390424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.390996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.391003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.391012] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.391019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.391028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.391036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.391045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.391052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.391061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.391068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.391077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.391084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.391093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.391101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:05.550 [2024-11-29 18:42:25.391120] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:05.550 [2024-11-29 18:42:25.391127] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 941bac55-3541-4ffc-a49c-047f115f854f 00:29:05.550 [2024-11-29 18:42:25.391137] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:05.550 [2024-11-29 18:42:25.391145] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:05.550 [2024-11-29 18:42:25.391153] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:05.550 [2024-11-29 18:42:25.391160] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:05.550 [2024-11-29 18:42:25.391170] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:05.550 [2024-11-29 18:42:25.391178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:05.550 [2024-11-29 18:42:25.391187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:05.550 [2024-11-29 18:42:25.391194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:05.550 [2024-11-29 18:42:25.391201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:05.550 [2024-11-29 18:42:25.391209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.550 [2024-11-29 18:42:25.391217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:05.550 [2024-11-29 18:42:25.391225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:29:05.551 [2024-11-29 18:42:25.391234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.551 [2024-11-29 18:42:25.392682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.551 [2024-11-29 18:42:25.392704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:29:05.551 [2024-11-29 18:42:25.392714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.430 ms 00:29:05.551 [2024-11-29 18:42:25.392724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.551 [2024-11-29 18:42:25.392798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.551 [2024-11-29 18:42:25.392808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:05.551 [2024-11-29 18:42:25.392816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:05.551 [2024-11-29 18:42:25.392824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.551 [2024-11-29 18:42:25.397975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.551 [2024-11-29 18:42:25.398025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:05.551 [2024-11-29 18:42:25.398034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.551 [2024-11-29 18:42:25.398044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.551 [2024-11-29 18:42:25.398097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.551 [2024-11-29 18:42:25.398106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:05.551 [2024-11-29 18:42:25.398114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.551 [2024-11-29 18:42:25.398123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.551 [2024-11-29 18:42:25.398183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.551 [2024-11-29 18:42:25.398197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:05.551 [2024-11-29 18:42:25.398205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.551 [2024-11-29 18:42:25.398215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.551 [2024-11-29 18:42:25.398231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.551 [2024-11-29 18:42:25.398245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:05.551 [2024-11-29 18:42:25.398252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.551 [2024-11-29 18:42:25.398261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.551 [2024-11-29 18:42:25.407263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.551 [2024-11-29 18:42:25.407307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:05.551 [2024-11-29 18:42:25.407319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.551 [2024-11-29 18:42:25.407329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.551 [2024-11-29 18:42:25.414962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.551 [2024-11-29 18:42:25.415006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:05.551 [2024-11-29 18:42:25.415016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.551 [2024-11-29 18:42:25.415025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.551 [2024-11-29 18:42:25.415066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.551 [2024-11-29 
18:42:25.415079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:05.551 [2024-11-29 18:42:25.415087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.551 [2024-11-29 18:42:25.415096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.551 [2024-11-29 18:42:25.415149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.551 [2024-11-29 18:42:25.415161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:05.551 [2024-11-29 18:42:25.415168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.551 [2024-11-29 18:42:25.415177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.551 [2024-11-29 18:42:25.415240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.551 [2024-11-29 18:42:25.415252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:05.551 [2024-11-29 18:42:25.415259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.551 [2024-11-29 18:42:25.415268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.551 [2024-11-29 18:42:25.415303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.551 [2024-11-29 18:42:25.415314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:05.551 [2024-11-29 18:42:25.415322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.551 [2024-11-29 18:42:25.415331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.551 [2024-11-29 18:42:25.415365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.551 [2024-11-29 18:42:25.415377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:05.551 [2024-11-29 18:42:25.415385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.551 [2024-11-29 18:42:25.415394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.551 [2024-11-29 18:42:25.415437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.551 [2024-11-29 18:42:25.415449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:05.551 [2024-11-29 18:42:25.415477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.551 [2024-11-29 18:42:25.415487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.551 [2024-11-29 18:42:25.415613] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.319 ms, result 0 00:29:05.551 true 00:29:05.551 18:42:25 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 94498 00:29:05.551 18:42:25 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94498 ']' 00:29:05.551 18:42:25 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94498 00:29:05.551 18:42:25 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:29:05.551 18:42:25 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:05.551 18:42:25 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94498 00:29:05.812 killing process with pid 94498 00:29:05.812 18:42:25 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:05.812 18:42:25 ftl.ftl_restore_fast -- 
common/autotest_common.sh@964 -- '[' reactor_0 = sudo ']'
00:29:05.812 18:42:25 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94498'
00:29:05.812 18:42:25 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 94498
00:29:05.812 18:42:25 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 94498
00:29:11.109 18:42:30 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:29:15.317 262144+0 records in
00:29:15.317 262144+0 records out
00:29:15.317 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.27342 s, 251 MB/s
00:29:15.317 18:42:34 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
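The dd figures above are internally consistent: 256K blocks of 4 KiB is 262144 * 4096 = 1073741824 bytes, exactly 1 GiB, and that amount over the reported 4.27342 s works out to the printed 251 MB/s. The md5sum is taken before the file is written to the FTL device, presumably so the test can compare checksums once the data comes back after the restore. Checking the arithmetic (an editorial sketch, not part of the test):

    echo $(( 262144 * 4096 ))                            # 1073741824 bytes, 1.0 GiB
    echo 'scale=1; 1073741824 / 4.27342 / 1000000' | bc  # 251.2, which dd reports as "251 MB/s"

The spdk_dd invocation that follows is SPDK's dd counterpart for bdevs: --if reads the regular file, --ob selects the ftl0 bdev as the output, and --json hands the standalone spdk_dd process the configuration assembled earlier, which is how it knows how to bring up ftl0 and its base and cache bdevs, as the second FTL startup trace below shows.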
00:29:17.230 18:42:37 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:29:17.491 [2024-11-29 18:42:37.117780] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization...
00:29:17.491 [2024-11-29 18:42:37.118144] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94709 ]
00:29:17.491 [2024-11-29 18:42:37.277547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:17.491 [2024-11-29 18:42:37.298702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:17.754 [2024-11-29 18:42:37.409392] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:29:17.754 [2024-11-29 18:42:37.409507] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:29:17.754 [2024-11-29 18:42:37.571314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.754 [2024-11-29 18:42:37.571359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:29:17.754 [2024-11-29 18:42:37.571372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:29:17.754 [2024-11-29 18:42:37.571380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.754 [2024-11-29 18:42:37.571430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.754 [2024-11-29 18:42:37.571440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:17.754 [2024-11-29 18:42:37.571449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:29:17.754 [2024-11-29 18:42:37.571480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.754 [2024-11-29 18:42:37.571499] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:29:17.754 [2024-11-29 18:42:37.571731] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:29:17.754 [2024-11-29 18:42:37.571748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.754 [2024-11-29 18:42:37.571756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:17.754 [2024-11-29 18:42:37.571766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms
00:29:17.754 [2024-11-29 18:42:37.571777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.754 [2024-11-29 18:42:37.572832] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:29:17.754 [2024-11-29 18:42:37.575516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.754 [2024-11-29 18:42:37.575548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:29:17.754 [2024-11-29 18:42:37.575563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.685 ms
00:29:17.754 [2024-11-29 18:42:37.575574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.754 [2024-11-29 18:42:37.575629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.754 [2024-11-29 18:42:37.575641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:29:17.754 [2024-11-29 18:42:37.575650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms
00:29:17.754 [2024-11-29 18:42:37.575657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.754 [2024-11-29 18:42:37.580643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.754 [2024-11-29 18:42:37.580674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:29:17.754 [2024-11-29 18:42:37.580686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.932 ms
00:29:17.754 [2024-11-29 18:42:37.580694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.754 [2024-11-29 18:42:37.580777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.754 [2024-11-29 18:42:37.580787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:29:17.754 [2024-11-29 18:42:37.580794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms
00:29:17.754 [2024-11-29 18:42:37.580804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.754 [2024-11-29 18:42:37.580837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.754 [2024-11-29 18:42:37.580845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:29:17.754 [2024-11-29 18:42:37.580853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:29:17.754 [2024-11-29 18:42:37.580862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.754 [2024-11-29 18:42:37.580882] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:29:17.754 [2024-11-29 18:42:37.582233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.754 [2024-11-29 18:42:37.582260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:29:17.754 [2024-11-29 18:42:37.582270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.356 ms
00:29:17.754 [2024-11-29 18:42:37.582277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.754 [2024-11-29 18:42:37.582303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.754 [2024-11-29 18:42:37.582311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:29:17.754 [2024-11-29 18:42:37.582319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:29:17.754 [2024-11-29 18:42:37.582330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.754 [2024-11-29 18:42:37.582353] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode
0 00:29:17.754 [2024-11-29 18:42:37.582375] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:17.754 [2024-11-29 18:42:37.582419] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:17.754 [2024-11-29 18:42:37.582450] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:17.754 [2024-11-29 18:42:37.582573] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:17.754 [2024-11-29 18:42:37.582583] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:17.754 [2024-11-29 18:42:37.582596] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:17.754 [2024-11-29 18:42:37.582605] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:17.754 [2024-11-29 18:42:37.582614] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:17.754 [2024-11-29 18:42:37.582622] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:17.754 [2024-11-29 18:42:37.582630] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:17.754 [2024-11-29 18:42:37.582636] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:17.754 [2024-11-29 18:42:37.582643] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:17.754 [2024-11-29 18:42:37.582655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.754 [2024-11-29 18:42:37.582666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:17.754 [2024-11-29 18:42:37.582673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:29:17.754 [2024-11-29 18:42:37.582684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.754 [2024-11-29 18:42:37.582803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.754 [2024-11-29 18:42:37.582817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:17.754 [2024-11-29 18:42:37.582825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:29:17.754 [2024-11-29 18:42:37.582835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.754 [2024-11-29 18:42:37.582942] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:17.754 [2024-11-29 18:42:37.582953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:17.754 [2024-11-29 18:42:37.582962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:17.754 [2024-11-29 18:42:37.582975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.754 [2024-11-29 18:42:37.582983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:17.754 [2024-11-29 18:42:37.582991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:17.754 [2024-11-29 18:42:37.582999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:17.754 [2024-11-29 18:42:37.583007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:17.754 [2024-11-29 18:42:37.583015] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:17.755 [2024-11-29 18:42:37.583029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:17.755 [2024-11-29 18:42:37.583038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:17.755 [2024-11-29 18:42:37.583052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:17.755 [2024-11-29 18:42:37.583059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:17.755 [2024-11-29 18:42:37.583068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:17.755 [2024-11-29 18:42:37.583075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:17.755 [2024-11-29 18:42:37.583087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.755 [2024-11-29 18:42:37.583095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:17.755 [2024-11-29 18:42:37.583103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:17.755 [2024-11-29 18:42:37.583116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.755 [2024-11-29 18:42:37.583123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:17.755 [2024-11-29 18:42:37.583135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:17.755 [2024-11-29 18:42:37.583143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:17.755 [2024-11-29 18:42:37.583151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:17.755 [2024-11-29 18:42:37.583161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:17.755 [2024-11-29 18:42:37.583169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:17.755 [2024-11-29 18:42:37.583180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:17.755 [2024-11-29 18:42:37.583190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:17.755 [2024-11-29 18:42:37.583205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:17.755 [2024-11-29 18:42:37.583213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:17.755 [2024-11-29 18:42:37.583223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:17.755 [2024-11-29 18:42:37.583231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:17.755 [2024-11-29 18:42:37.583242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:17.755 [2024-11-29 18:42:37.583252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:17.755 [2024-11-29 18:42:37.583259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:17.755 [2024-11-29 18:42:37.583267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:17.755 [2024-11-29 18:42:37.583274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:17.755 [2024-11-29 18:42:37.583281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:17.755 [2024-11-29 18:42:37.583289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:17.755 [2024-11-29 18:42:37.583297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:17.755 [2024-11-29 18:42:37.583304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.755 [2024-11-29 
18:42:37.583311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:17.755 [2024-11-29 18:42:37.583318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:17.755 [2024-11-29 18:42:37.583325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.755 [2024-11-29 18:42:37.583333] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:17.755 [2024-11-29 18:42:37.583342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:17.755 [2024-11-29 18:42:37.583349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:17.755 [2024-11-29 18:42:37.583356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.755 [2024-11-29 18:42:37.583364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:17.755 [2024-11-29 18:42:37.583370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:17.755 [2024-11-29 18:42:37.583377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:17.755 [2024-11-29 18:42:37.583383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:17.755 [2024-11-29 18:42:37.583390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:17.755 [2024-11-29 18:42:37.583396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:17.755 [2024-11-29 18:42:37.583404] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:17.755 [2024-11-29 18:42:37.583412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:17.755 [2024-11-29 18:42:37.583420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:17.755 [2024-11-29 18:42:37.583428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:17.755 [2024-11-29 18:42:37.583434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:17.755 [2024-11-29 18:42:37.583441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:17.755 [2024-11-29 18:42:37.583470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:17.755 [2024-11-29 18:42:37.583478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:17.755 [2024-11-29 18:42:37.583485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:17.755 [2024-11-29 18:42:37.583492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:17.755 [2024-11-29 18:42:37.583499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:17.755 [2024-11-29 18:42:37.583506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:17.755 [2024-11-29 18:42:37.583513] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:17.755 [2024-11-29 18:42:37.583519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:17.755 [2024-11-29 18:42:37.583526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:17.755 [2024-11-29 18:42:37.583533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:17.755 [2024-11-29 18:42:37.583539] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:17.755 [2024-11-29 18:42:37.583548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:17.755 [2024-11-29 18:42:37.583555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:17.755 [2024-11-29 18:42:37.583563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:17.755 [2024-11-29 18:42:37.583570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:17.755 [2024-11-29 18:42:37.583577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:17.755 [2024-11-29 18:42:37.583587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.755 [2024-11-29 18:42:37.583594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:17.755 [2024-11-29 18:42:37.583601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:29:17.755 [2024-11-29 18:42:37.583610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.755 [2024-11-29 18:42:37.592526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.755 [2024-11-29 18:42:37.592561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:17.755 [2024-11-29 18:42:37.592572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.876 ms 00:29:17.755 [2024-11-29 18:42:37.592585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.755 [2024-11-29 18:42:37.592667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.755 [2024-11-29 18:42:37.592674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:17.755 [2024-11-29 18:42:37.592683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:29:17.755 [2024-11-29 18:42:37.592690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.755 [2024-11-29 18:42:37.608432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.755 [2024-11-29 18:42:37.608498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:17.755 [2024-11-29 18:42:37.608513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.690 ms 00:29:17.755 [2024-11-29 18:42:37.608523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.755 [2024-11-29 18:42:37.608568] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.755 [2024-11-29 18:42:37.608580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:17.755 [2024-11-29 18:42:37.608599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:17.755 [2024-11-29 18:42:37.608608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.755 [2024-11-29 18:42:37.608995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.755 [2024-11-29 18:42:37.609030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:17.755 [2024-11-29 18:42:37.609042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:29:17.755 [2024-11-29 18:42:37.609051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.755 [2024-11-29 18:42:37.609211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.755 [2024-11-29 18:42:37.609224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:17.755 [2024-11-29 18:42:37.609234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:29:17.755 [2024-11-29 18:42:37.609245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.755 [2024-11-29 18:42:37.614979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.755 [2024-11-29 18:42:37.615131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:17.755 [2024-11-29 18:42:37.615149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.711 ms 00:29:17.756 [2024-11-29 18:42:37.615159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.756 [2024-11-29 18:42:37.618339] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:17.756 [2024-11-29 18:42:37.618467] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:17.756 [2024-11-29 18:42:37.618482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.756 [2024-11-29 18:42:37.618490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:17.756 [2024-11-29 18:42:37.618498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.228 ms 00:29:17.756 [2024-11-29 18:42:37.618505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.756 [2024-11-29 18:42:37.633134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.756 [2024-11-29 18:42:37.633255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:17.756 [2024-11-29 18:42:37.633275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.596 ms 00:29:17.756 [2024-11-29 18:42:37.633283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.756 [2024-11-29 18:42:37.635198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.756 [2024-11-29 18:42:37.635225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:17.756 [2024-11-29 18:42:37.635234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.881 ms 00:29:17.756 [2024-11-29 18:42:37.635241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.756 [2024-11-29 18:42:37.637053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:17.756 [2024-11-29 18:42:37.637083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:17.756 [2024-11-29 18:42:37.637092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.782 ms 00:29:17.756 [2024-11-29 18:42:37.637098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.756 [2024-11-29 18:42:37.637413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.756 [2024-11-29 18:42:37.637424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:17.756 [2024-11-29 18:42:37.637432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:29:17.756 [2024-11-29 18:42:37.637439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.756 [2024-11-29 18:42:37.654408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.756 [2024-11-29 18:42:37.654469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:17.756 [2024-11-29 18:42:37.654487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.942 ms 00:29:17.756 [2024-11-29 18:42:37.654495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.017 [2024-11-29 18:42:37.661967] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:18.017 [2024-11-29 18:42:37.664283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.017 [2024-11-29 18:42:37.664399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:18.017 [2024-11-29 18:42:37.664419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.751 ms 00:29:18.017 [2024-11-29 18:42:37.664427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.017 [2024-11-29 18:42:37.664495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.017 [2024-11-29 18:42:37.664510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:18.017 [2024-11-29 18:42:37.664519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:18.017 [2024-11-29 18:42:37.664527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.017 [2024-11-29 18:42:37.664605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.017 [2024-11-29 18:42:37.664615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:18.017 [2024-11-29 18:42:37.664625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:29:18.017 [2024-11-29 18:42:37.664633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.017 [2024-11-29 18:42:37.664652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.017 [2024-11-29 18:42:37.664659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:18.017 [2024-11-29 18:42:37.664667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:18.017 [2024-11-29 18:42:37.664678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.017 [2024-11-29 18:42:37.664708] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:18.017 [2024-11-29 18:42:37.664717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.017 [2024-11-29 18:42:37.664727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 
00:29:18.017 [2024-11-29 18:42:37.664734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:18.017 [2024-11-29 18:42:37.664744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.017 [2024-11-29 18:42:37.668883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.017 [2024-11-29 18:42:37.668916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:18.017 [2024-11-29 18:42:37.668927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.123 ms 00:29:18.017 [2024-11-29 18:42:37.668936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.017 [2024-11-29 18:42:37.669011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.017 [2024-11-29 18:42:37.669022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:18.017 [2024-11-29 18:42:37.669035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:18.017 [2024-11-29 18:42:37.669043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.017 [2024-11-29 18:42:37.669937] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 98.242 ms, result 0 00:29:18.960  [2024-11-29T18:42:39.809Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-29T18:42:40.752Z] Copying: 41/1024 [MB] (21 MBps) [2024-11-29T18:42:41.696Z] Copying: 73/1024 [MB] (31 MBps) [2024-11-29T18:42:43.085Z] Copying: 112/1024 [MB] (39 MBps) [2024-11-29T18:42:44.032Z] Copying: 130/1024 [MB] (17 MBps) [2024-11-29T18:42:44.977Z] Copying: 148/1024 [MB] (18 MBps) [2024-11-29T18:42:45.930Z] Copying: 186/1024 [MB] (37 MBps) [2024-11-29T18:42:46.925Z] Copying: 224/1024 [MB] (38 MBps) [2024-11-29T18:42:47.867Z] Copying: 250/1024 [MB] (26 MBps) [2024-11-29T18:42:48.809Z] Copying: 283/1024 [MB] (32 MBps) [2024-11-29T18:42:49.805Z] Copying: 309/1024 [MB] (26 MBps) [2024-11-29T18:42:50.747Z] Copying: 343/1024 [MB] (34 MBps) [2024-11-29T18:42:51.689Z] Copying: 355/1024 [MB] (11 MBps) [2024-11-29T18:42:53.072Z] Copying: 370/1024 [MB] (15 MBps) [2024-11-29T18:42:54.014Z] Copying: 380/1024 [MB] (10 MBps) [2024-11-29T18:42:54.958Z] Copying: 408/1024 [MB] (28 MBps) [2024-11-29T18:42:55.901Z] Copying: 432/1024 [MB] (23 MBps) [2024-11-29T18:42:56.844Z] Copying: 473/1024 [MB] (41 MBps) [2024-11-29T18:42:57.787Z] Copying: 515/1024 [MB] (41 MBps) [2024-11-29T18:42:58.730Z] Copying: 556/1024 [MB] (41 MBps) [2024-11-29T18:43:00.117Z] Copying: 598/1024 [MB] (42 MBps) [2024-11-29T18:43:00.689Z] Copying: 623/1024 [MB] (25 MBps) [2024-11-29T18:43:02.075Z] Copying: 646/1024 [MB] (22 MBps) [2024-11-29T18:43:03.020Z] Copying: 674/1024 [MB] (27 MBps) [2024-11-29T18:43:03.965Z] Copying: 700/1024 [MB] (25 MBps) [2024-11-29T18:43:04.911Z] Copying: 715/1024 [MB] (15 MBps) [2024-11-29T18:43:05.855Z] Copying: 733/1024 [MB] (17 MBps) [2024-11-29T18:43:06.800Z] Copying: 751/1024 [MB] (17 MBps) [2024-11-29T18:43:07.745Z] Copying: 770/1024 [MB] (19 MBps) [2024-11-29T18:43:08.690Z] Copying: 790/1024 [MB] (19 MBps) [2024-11-29T18:43:10.079Z] Copying: 810/1024 [MB] (20 MBps) [2024-11-29T18:43:11.021Z] Copying: 826/1024 [MB] (15 MBps) [2024-11-29T18:43:11.964Z] Copying: 842/1024 [MB] (16 MBps) [2024-11-29T18:43:12.908Z] Copying: 867/1024 [MB] (24 MBps) [2024-11-29T18:43:13.853Z] Copying: 880/1024 [MB] (13 MBps) [2024-11-29T18:43:14.796Z] Copying: 891/1024 [MB] (10 MBps) [2024-11-29T18:43:15.797Z] Copying: 912/1024 [MB] (21 MBps) 
[2024-11-29T18:43:16.750Z] Copying: 941/1024 [MB] (28 MBps) [2024-11-29T18:43:17.694Z] Copying: 968/1024 [MB] (27 MBps) [2024-11-29T18:43:18.269Z] Copying: 1004/1024 [MB] (36 MBps) [2024-11-29T18:43:18.269Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-29 18:43:18.179409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.364 [2024-11-29 18:43:18.179445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:58.364 [2024-11-29 18:43:18.179471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:58.364 [2024-11-29 18:43:18.179483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.364 [2024-11-29 18:43:18.179500] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:58.364 [2024-11-29 18:43:18.179860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.364 [2024-11-29 18:43:18.179875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:58.364 [2024-11-29 18:43:18.179882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:29:58.364 [2024-11-29 18:43:18.179888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.364 [2024-11-29 18:43:18.181252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.364 [2024-11-29 18:43:18.181353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:58.364 [2024-11-29 18:43:18.181373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 00:29:58.364 [2024-11-29 18:43:18.181379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.364 [2024-11-29 18:43:18.181403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.364 [2024-11-29 18:43:18.181409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:58.364 [2024-11-29 18:43:18.181415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:58.364 [2024-11-29 18:43:18.181425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.364 [2024-11-29 18:43:18.181469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.364 [2024-11-29 18:43:18.181477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:58.364 [2024-11-29 18:43:18.181484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:58.364 [2024-11-29 18:43:18.181489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.364 [2024-11-29 18:43:18.181499] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:58.364 [2024-11-29 18:43:18.181511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:58.364 [2024-11-29 18:43:18.181520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:58.364 [2024-11-29 18:43:18.181526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:58.364 [2024-11-29 18:43:18.181532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:58.364 [2024-11-29 18:43:18.181538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:58.364 [2024-11-29 18:43:18.181544] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:58.364 [2024-11-29 18:43:18.181550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:58.364 [2024-11-29 18:43:18.181556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 
18:43:18.181697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:29:58.365 [2024-11-29 18:43:18.181846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.181995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.182010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.182016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.182022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.182027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.182033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.182039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.182045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.182051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.182057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.182062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.182068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.182074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.182079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.182086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:58.365 [2024-11-29 18:43:18.182092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:58.366 [2024-11-29 18:43:18.182097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:58.366 [2024-11-29 18:43:18.182103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:58.366 [2024-11-29 18:43:18.182109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:58.366 [2024-11-29 18:43:18.182120] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:58.366 [2024-11-29 18:43:18.182126] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 941bac55-3541-4ffc-a49c-047f115f854f 00:29:58.366 [2024-11-29 18:43:18.182132] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:58.366 [2024-11-29 18:43:18.182140] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:58.366 [2024-11-29 18:43:18.182146] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:58.366 [2024-11-29 18:43:18.182151] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:58.366 [2024-11-29 18:43:18.182156] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:58.366 [2024-11-29 18:43:18.182162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:58.366 [2024-11-29 18:43:18.182168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:58.366 [2024-11-29 18:43:18.182173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:58.366 [2024-11-29 18:43:18.182178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:58.366 [2024-11-29 18:43:18.182184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.366 [2024-11-29 18:43:18.182189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:58.366 [2024-11-29 18:43:18.182197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:29:58.366 [2024-11-29 18:43:18.182202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.366 [2024-11-29 18:43:18.183375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.366 [2024-11-29 18:43:18.183389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:58.366 [2024-11-29 18:43:18.183397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.162 ms 00:29:58.366 [2024-11-29 18:43:18.183406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.366 [2024-11-29 18:43:18.183481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.366 [2024-11-29 18:43:18.183491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:58.366 [2024-11-29 18:43:18.183498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:29:58.366 [2024-11-29 18:43:18.183504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.366 [2024-11-29 18:43:18.187769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:58.366 [2024-11-29 18:43:18.187857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:58.366 [2024-11-29 18:43:18.187899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:58.366 [2024-11-29 18:43:18.187916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.366 [2024-11-29 18:43:18.187969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:58.366 [2024-11-29 18:43:18.188045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:58.366 [2024-11-29 18:43:18.188067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:58.366 [2024-11-29 18:43:18.188082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.366 [2024-11-29 18:43:18.188126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:58.366 [2024-11-29 18:43:18.188145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:58.366 [2024-11-29 18:43:18.188160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:58.366 [2024-11-29 18:43:18.188206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.366 [2024-11-29 18:43:18.188229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:58.366 [2024-11-29 18:43:18.188246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:29:58.366 [2024-11-29 18:43:18.188265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:58.366 [2024-11-29 18:43:18.188284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.366 [2024-11-29 18:43:18.195742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:58.366 [2024-11-29 18:43:18.195856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:58.366 [2024-11-29 18:43:18.195901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:58.366 [2024-11-29 18:43:18.195919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.366 [2024-11-29 18:43:18.201850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:58.366 [2024-11-29 18:43:18.201961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:58.366 [2024-11-29 18:43:18.202013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:58.366 [2024-11-29 18:43:18.202032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.366 [2024-11-29 18:43:18.202060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:58.366 [2024-11-29 18:43:18.202076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:58.366 [2024-11-29 18:43:18.202091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:58.366 [2024-11-29 18:43:18.202111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.366 [2024-11-29 18:43:18.202208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:58.366 [2024-11-29 18:43:18.202228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:58.366 [2024-11-29 18:43:18.202243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:58.366 [2024-11-29 18:43:18.202261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.366 [2024-11-29 18:43:18.202343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:58.366 [2024-11-29 18:43:18.202364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:58.366 [2024-11-29 18:43:18.202380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:58.366 [2024-11-29 18:43:18.202395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.366 [2024-11-29 18:43:18.202448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:58.366 [2024-11-29 18:43:18.202511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:58.366 [2024-11-29 18:43:18.202549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:58.366 [2024-11-29 18:43:18.202566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.366 [2024-11-29 18:43:18.202605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:58.366 [2024-11-29 18:43:18.202622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:58.366 [2024-11-29 18:43:18.202638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:58.366 [2024-11-29 18:43:18.202682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.366 [2024-11-29 18:43:18.202725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:58.366 [2024-11-29 18:43:18.202744] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:58.366 [2024-11-29 18:43:18.202759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:58.366 [2024-11-29 18:43:18.202778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.366 [2024-11-29 18:43:18.202914] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 23.481 ms, result 0 00:29:58.628 00:29:58.628 00:29:58.889 18:43:18 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:58.889 [2024-11-29 18:43:18.610180] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:29:58.889 [2024-11-29 18:43:18.610465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95132 ] 00:29:58.889 [2024-11-29 18:43:18.767533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.889 [2024-11-29 18:43:18.785356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.152 [2024-11-29 18:43:18.867005] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:59.152 [2024-11-29 18:43:18.867202] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:59.152 [2024-11-29 18:43:19.013383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.152 [2024-11-29 18:43:19.013516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:59.152 [2024-11-29 18:43:19.013569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:59.152 [2024-11-29 18:43:19.013589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.152 [2024-11-29 18:43:19.013639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.152 [2024-11-29 18:43:19.013663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:59.152 [2024-11-29 18:43:19.013684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:29:59.152 [2024-11-29 18:43:19.013698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.152 [2024-11-29 18:43:19.013725] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:59.152 [2024-11-29 18:43:19.013969] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:59.152 [2024-11-29 18:43:19.014175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.152 [2024-11-29 18:43:19.014198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:59.152 [2024-11-29 18:43:19.014216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:29:59.152 [2024-11-29 18:43:19.014230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.152 [2024-11-29 18:43:19.014464] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:59.152 [2024-11-29 18:43:19.014499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.152 
[2024-11-29 18:43:19.014515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:59.152 [2024-11-29 18:43:19.014568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:59.152 [2024-11-29 18:43:19.014590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.152 [2024-11-29 18:43:19.014635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.152 [2024-11-29 18:43:19.014652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:59.152 [2024-11-29 18:43:19.014695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:29:59.152 [2024-11-29 18:43:19.014715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.152 [2024-11-29 18:43:19.014914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.152 [2024-11-29 18:43:19.014935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:59.152 [2024-11-29 18:43:19.014981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:29:59.152 [2024-11-29 18:43:19.015000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.152 [2024-11-29 18:43:19.015069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.152 [2024-11-29 18:43:19.015111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:59.152 [2024-11-29 18:43:19.015128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:29:59.152 [2024-11-29 18:43:19.015166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.152 [2024-11-29 18:43:19.015199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.152 [2024-11-29 18:43:19.015216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:59.152 [2024-11-29 18:43:19.015257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:59.152 [2024-11-29 18:43:19.015274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.152 [2024-11-29 18:43:19.015301] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:59.152 [2024-11-29 18:43:19.016582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.152 [2024-11-29 18:43:19.016662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:59.152 [2024-11-29 18:43:19.016700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.287 ms 00:29:59.152 [2024-11-29 18:43:19.016708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.152 [2024-11-29 18:43:19.016734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.152 [2024-11-29 18:43:19.016740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:59.152 [2024-11-29 18:43:19.016747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:59.152 [2024-11-29 18:43:19.016753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.152 [2024-11-29 18:43:19.016766] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:59.152 [2024-11-29 18:43:19.016782] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:59.152 [2024-11-29 18:43:19.016808] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:59.152 [2024-11-29 18:43:19.016819] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:59.152 [2024-11-29 18:43:19.016898] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:59.152 [2024-11-29 18:43:19.016906] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:59.152 [2024-11-29 18:43:19.016914] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:59.152 [2024-11-29 18:43:19.016925] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:59.152 [2024-11-29 18:43:19.016934] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:59.152 [2024-11-29 18:43:19.016941] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:59.152 [2024-11-29 18:43:19.016946] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:59.152 [2024-11-29 18:43:19.016952] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:59.152 [2024-11-29 18:43:19.016958] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:59.152 [2024-11-29 18:43:19.016963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.153 [2024-11-29 18:43:19.016971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:59.153 [2024-11-29 18:43:19.016977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:29:59.153 [2024-11-29 18:43:19.016983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.153 [2024-11-29 18:43:19.017045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.153 [2024-11-29 18:43:19.017054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:59.153 [2024-11-29 18:43:19.017059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:59.153 [2024-11-29 18:43:19.017067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.153 [2024-11-29 18:43:19.017147] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:59.153 [2024-11-29 18:43:19.017155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:59.153 [2024-11-29 18:43:19.017161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:59.153 [2024-11-29 18:43:19.017169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.153 [2024-11-29 18:43:19.017176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:59.153 [2024-11-29 18:43:19.017185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:59.153 [2024-11-29 18:43:19.017191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:59.153 [2024-11-29 18:43:19.017196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:59.153 [2024-11-29 18:43:19.017202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:59.153 [2024-11-29 18:43:19.017207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:59.153 [2024-11-29 18:43:19.017212] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:59.153 [2024-11-29 18:43:19.017217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:59.153 [2024-11-29 18:43:19.017222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:59.153 [2024-11-29 18:43:19.017227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:59.153 [2024-11-29 18:43:19.017232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:59.153 [2024-11-29 18:43:19.017237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.153 [2024-11-29 18:43:19.017242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:59.153 [2024-11-29 18:43:19.017247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:59.153 [2024-11-29 18:43:19.017252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.153 [2024-11-29 18:43:19.017259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:59.153 [2024-11-29 18:43:19.017264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:59.153 [2024-11-29 18:43:19.017269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.153 [2024-11-29 18:43:19.017274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:59.153 [2024-11-29 18:43:19.017279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:59.153 [2024-11-29 18:43:19.017284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.153 [2024-11-29 18:43:19.017289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:59.153 [2024-11-29 18:43:19.017295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:59.153 [2024-11-29 18:43:19.017300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.153 [2024-11-29 18:43:19.017306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:59.153 [2024-11-29 18:43:19.017311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:59.153 [2024-11-29 18:43:19.017316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.153 [2024-11-29 18:43:19.017321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:59.153 [2024-11-29 18:43:19.017326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:59.153 [2024-11-29 18:43:19.017331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:59.153 [2024-11-29 18:43:19.017337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:59.153 [2024-11-29 18:43:19.017347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:59.153 [2024-11-29 18:43:19.017353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:59.153 [2024-11-29 18:43:19.017359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:59.153 [2024-11-29 18:43:19.017365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:59.153 [2024-11-29 18:43:19.017371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.153 [2024-11-29 18:43:19.017378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:59.153 [2024-11-29 18:43:19.017384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:59.153 
[2024-11-29 18:43:19.017389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.153 [2024-11-29 18:43:19.017395] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:59.153 [2024-11-29 18:43:19.017402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:59.153 [2024-11-29 18:43:19.017411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:59.153 [2024-11-29 18:43:19.017419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.153 [2024-11-29 18:43:19.017425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:59.153 [2024-11-29 18:43:19.017432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:59.153 [2024-11-29 18:43:19.017438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:59.153 [2024-11-29 18:43:19.017444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:59.153 [2024-11-29 18:43:19.017609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:59.153 [2024-11-29 18:43:19.017636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:59.153 [2024-11-29 18:43:19.017654] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:59.153 [2024-11-29 18:43:19.017687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:59.153 [2024-11-29 18:43:19.017739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:59.153 [2024-11-29 18:43:19.017763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:59.153 [2024-11-29 18:43:19.017785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:59.153 [2024-11-29 18:43:19.017823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:59.153 [2024-11-29 18:43:19.017846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:59.153 [2024-11-29 18:43:19.017889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:59.153 [2024-11-29 18:43:19.017912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:59.153 [2024-11-29 18:43:19.017934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:59.153 [2024-11-29 18:43:19.017972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:59.153 [2024-11-29 18:43:19.017995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:59.153 [2024-11-29 18:43:19.018024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:59.153 [2024-11-29 18:43:19.018046] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:59.153 [2024-11-29 18:43:19.018070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:59.153 [2024-11-29 18:43:19.018167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:59.153 [2024-11-29 18:43:19.018190] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:59.153 [2024-11-29 18:43:19.018217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:59.153 [2024-11-29 18:43:19.018240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:59.153 [2024-11-29 18:43:19.018261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:59.153 [2024-11-29 18:43:19.018282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:59.153 [2024-11-29 18:43:19.018303] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:59.153 [2024-11-29 18:43:19.018354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.153 [2024-11-29 18:43:19.018371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:59.153 [2024-11-29 18:43:19.018390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:29:59.153 [2024-11-29 18:43:19.018404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.153 [2024-11-29 18:43:19.023724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.153 [2024-11-29 18:43:19.023801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:59.153 [2024-11-29 18:43:19.023839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.254 ms 00:29:59.153 [2024-11-29 18:43:19.023847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.153 [2024-11-29 18:43:19.023916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.153 [2024-11-29 18:43:19.023923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:59.153 [2024-11-29 18:43:19.023930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:29:59.153 [2024-11-29 18:43:19.023936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.153 [2024-11-29 18:43:19.041340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.153 [2024-11-29 18:43:19.041444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:59.153 [2024-11-29 18:43:19.041467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.373 ms 00:29:59.153 [2024-11-29 18:43:19.041474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.153 [2024-11-29 18:43:19.041497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.154 [2024-11-29 18:43:19.041503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:59.154 
[2024-11-29 18:43:19.041510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:59.154 [2024-11-29 18:43:19.041515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.154 [2024-11-29 18:43:19.041590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.154 [2024-11-29 18:43:19.041601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:59.154 [2024-11-29 18:43:19.041607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:29:59.154 [2024-11-29 18:43:19.041613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.154 [2024-11-29 18:43:19.041700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.154 [2024-11-29 18:43:19.041707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:59.154 [2024-11-29 18:43:19.041714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:29:59.154 [2024-11-29 18:43:19.041721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.154 [2024-11-29 18:43:19.047014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.154 [2024-11-29 18:43:19.047050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:59.154 [2024-11-29 18:43:19.047074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.278 ms 00:29:59.154 [2024-11-29 18:43:19.047084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.154 [2024-11-29 18:43:19.047203] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:59.154 [2024-11-29 18:43:19.047220] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:59.154 [2024-11-29 18:43:19.047233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.154 [2024-11-29 18:43:19.047244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:59.154 [2024-11-29 18:43:19.047255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:29:59.154 [2024-11-29 18:43:19.047267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.416 [2024-11-29 18:43:19.059936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.416 [2024-11-29 18:43:19.059958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:59.416 [2024-11-29 18:43:19.059966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.650 ms 00:29:59.416 [2024-11-29 18:43:19.059976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.416 [2024-11-29 18:43:19.060060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.416 [2024-11-29 18:43:19.060066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:59.416 [2024-11-29 18:43:19.060072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:59.416 [2024-11-29 18:43:19.060080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.416 [2024-11-29 18:43:19.060113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.416 [2024-11-29 18:43:19.060123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:59.416 [2024-11-29 18:43:19.060129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.001 ms 00:29:59.416 [2024-11-29 18:43:19.060138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.416 [2024-11-29 18:43:19.060361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.416 [2024-11-29 18:43:19.060369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:59.416 [2024-11-29 18:43:19.060376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:29:59.416 [2024-11-29 18:43:19.060381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.416 [2024-11-29 18:43:19.060394] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:59.416 [2024-11-29 18:43:19.060401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.416 [2024-11-29 18:43:19.060408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:59.416 [2024-11-29 18:43:19.060414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:59.416 [2024-11-29 18:43:19.060419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.416 [2024-11-29 18:43:19.066613] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:59.416 [2024-11-29 18:43:19.066719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.416 [2024-11-29 18:43:19.066727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:59.416 [2024-11-29 18:43:19.066733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.288 ms 00:29:59.416 [2024-11-29 18:43:19.066744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.416 [2024-11-29 18:43:19.068481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.416 [2024-11-29 18:43:19.068500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:59.416 [2024-11-29 18:43:19.068507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.724 ms 00:29:59.416 [2024-11-29 18:43:19.068513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.416 [2024-11-29 18:43:19.068562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.416 [2024-11-29 18:43:19.068572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:59.416 [2024-11-29 18:43:19.068578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:29:59.416 [2024-11-29 18:43:19.068586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.416 [2024-11-29 18:43:19.068605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.416 [2024-11-29 18:43:19.068611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:59.416 [2024-11-29 18:43:19.068616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:59.416 [2024-11-29 18:43:19.068622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.416 [2024-11-29 18:43:19.068643] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:59.416 [2024-11-29 18:43:19.068650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.416 [2024-11-29 18:43:19.068656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:59.416 [2024-11-29 18:43:19.068663] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:29:59.416 [2024-11-29 18:43:19.068668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:59.416 [2024-11-29 18:43:19.072409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:59.416 [2024-11-29 18:43:19.072517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:29:59.416 [2024-11-29 18:43:19.072529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.725 ms
00:29:59.416 [2024-11-29 18:43:19.072535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:59.416 [2024-11-29 18:43:19.072604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:59.416 [2024-11-29 18:43:19.072617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:29:59.416 [2024-11-29 18:43:19.072623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:29:59.416 [2024-11-29 18:43:19.072629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:59.416 [2024-11-29 18:43:19.073277] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 59.610 ms, result 0
00:30:00.361 [2024-11-29T18:43:21.209Z] Copying: 16/1024 [MB] (16 MBps)
[... intermediate copy-progress updates (33-1005 MB, 10-28 MBps) omitted ...]
[2024-11-29T18:44:24.756Z] Copying: 1024/1024 [MB] (average 15 MBps)
[2024-11-29 18:44:24.481288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:04.851 [2024-11-29 18:44:24.481392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:31:04.851 [2024-11-29 18:44:24.481412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:31:04.851 [2024-11-29 18:44:24.481423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:04.851 [2024-11-29 18:44:24.481474] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:31:04.851 [2024-11-29 18:44:24.482481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:04.851 [2024-11-29 18:44:24.482531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:31:04.851 [2024-11-29 18:44:24.482545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms
00:31:04.851 [2024-11-29 18:44:24.482555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:04.851 [2024-11-29 18:44:24.482835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:04.851 [2024-11-29 18:44:24.482849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:31:04.851 [2024-11-29 18:44:24.482860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms
00:31:04.851 [2024-11-29 18:44:24.482869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:04.851 [2024-11-29 18:44:24.482907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:04.851 [2024-11-29 18:44:24.482919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:31:04.851 [2024-11-29 18:44:24.482928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:31:04.851 [2024-11-29 18:44:24.482936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:04.851 [2024-11-29 18:44:24.483006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:04.851 [2024-11-29 18:44:24.483018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state
00:31:04.851 [2024-11-29 18:44:24.483028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:31:04.851 [2024-11-29 18:44:24.483037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:04.851 [2024-11-29 18:44:24.483053] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:31:04.851 [2024-11-29 18:44:24.483071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Band 2 through Band 100: identical, 0 / 261120 wr_cnt: 0 state: free ...]
00:31:04.852 [2024-11-29 18:44:24.483978] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:31:04.852 [2024-11-29 18:44:24.483987] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 941bac55-3541-4ffc-a49c-047f115f854f
00:31:04.852 [2024-11-29 18:44:24.483997] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:31:04.852 [2024-11-29 18:44:24.484010] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32
00:31:04.852 [2024-11-29 18:44:24.484022] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:31:04.852 [2024-11-29 18:44:24.484036] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:31:04.852 [2024-11-29 18:44:24.484044] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:31:04.852 [2024-11-29 18:44:24.484053] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:31:04.852 [2024-11-29 18:44:24.484061] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:31:04.852 [2024-11-29 18:44:24.484069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:31:04.852 [2024-11-29 18:44:24.484076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:31:04.852 [2024-11-29 18:44:24.484084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:04.852 [2024-11-29 18:44:24.484093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:31:04.852 [2024-11-29 18:44:24.484102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms
00:31:04.852 [2024-11-29 18:44:24.484117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:04.852 [2024-11-29 18:44:24.487291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:04.852 [2024-11-29 18:44:24.487331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:31:04.852 [2024-11-29 18:44:24.487342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.157 ms
00:31:04.852 [2024-11-29 18:44:24.487351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:04.852 [2024-11-29 18:44:24.487538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:04.852 [2024-11-29 18:44:24.487550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:31:04.852 [2024-11-29 18:44:24.487568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms
00:31:04.852 [2024-11-29 18:44:24.487576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:04.852 [2024-11-29 18:44:24.498871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:04.852 [2024-11-29 18:44:24.498930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:04.852
[2024-11-29 18:44:24.498942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.852 [2024-11-29 18:44:24.498951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.852 [2024-11-29 18:44:24.499030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.852 [2024-11-29 18:44:24.499047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:04.852 [2024-11-29 18:44:24.499060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.852 [2024-11-29 18:44:24.499070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.852 [2024-11-29 18:44:24.499153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.852 [2024-11-29 18:44:24.499167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:04.852 [2024-11-29 18:44:24.499176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.852 [2024-11-29 18:44:24.499186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.852 [2024-11-29 18:44:24.499210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.852 [2024-11-29 18:44:24.499221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:04.852 [2024-11-29 18:44:24.499230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.852 [2024-11-29 18:44:24.499241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.852 [2024-11-29 18:44:24.520366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.852 [2024-11-29 18:44:24.520420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:04.852 [2024-11-29 18:44:24.520432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.852 [2024-11-29 18:44:24.520443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.852 [2024-11-29 18:44:24.537748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.852 [2024-11-29 18:44:24.538085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:04.852 [2024-11-29 18:44:24.538119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.852 [2024-11-29 18:44:24.538136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.852 [2024-11-29 18:44:24.538210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.852 [2024-11-29 18:44:24.538222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:04.852 [2024-11-29 18:44:24.538231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.852 [2024-11-29 18:44:24.538240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.852 [2024-11-29 18:44:24.538283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.852 [2024-11-29 18:44:24.538293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:04.852 [2024-11-29 18:44:24.538303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.852 [2024-11-29 18:44:24.538323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.852 [2024-11-29 18:44:24.538397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.852 [2024-11-29 18:44:24.538408] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:04.852 [2024-11-29 18:44:24.538417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.852 [2024-11-29 18:44:24.538426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.852 [2024-11-29 18:44:24.538476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.852 [2024-11-29 18:44:24.538488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:04.852 [2024-11-29 18:44:24.538498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.852 [2024-11-29 18:44:24.538507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.853 [2024-11-29 18:44:24.538565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.853 [2024-11-29 18:44:24.538578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:04.853 [2024-11-29 18:44:24.538588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.853 [2024-11-29 18:44:24.538598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.853 [2024-11-29 18:44:24.538659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.853 [2024-11-29 18:44:24.538673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:04.853 [2024-11-29 18:44:24.538684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.853 [2024-11-29 18:44:24.538698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.853 [2024-11-29 18:44:24.538876] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 57.539 ms, result 0 00:31:05.114 00:31:05.114 00:31:05.114 18:44:24 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:07.658 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:07.658 18:44:27 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:31:07.658 [2024-11-29 18:44:27.145161] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
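For orientation: the two harness commands traced just above come from test/ftl/restore.sh (the "ftl/restore.sh@76" and "@79" markers are the script's own line numbers in the xtrace output). After the fast shutdown, the test first re-checks the data it wrote earlier against a stored checksum, then resumes copying into the restored ftl0 bdev past the already-verified region. A minimal sketch of the equivalent standalone invocation, assuming the same workspace paths that appear in the log:

    # verify the previously written data against the saved md5
    md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5

    # resume writing to the ftl0 bdev, skipping the first 131072
    # output blocks that were already written and verified
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
        --seek=131072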
00:31:07.658 [2024-11-29 18:44:27.145330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95814 ] 00:31:07.658 [2024-11-29 18:44:27.305378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.658 [2024-11-29 18:44:27.345363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.658 [2024-11-29 18:44:27.494954] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:07.658 [2024-11-29 18:44:27.495055] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:07.921 [2024-11-29 18:44:27.659949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.921 [2024-11-29 18:44:27.660014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:07.921 [2024-11-29 18:44:27.660032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:07.921 [2024-11-29 18:44:27.660042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.921 [2024-11-29 18:44:27.660114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.921 [2024-11-29 18:44:27.660131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:07.921 [2024-11-29 18:44:27.660141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:31:07.921 [2024-11-29 18:44:27.660150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.921 [2024-11-29 18:44:27.660178] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:07.921 [2024-11-29 18:44:27.660482] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:07.921 [2024-11-29 18:44:27.660509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.922 [2024-11-29 18:44:27.660519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:07.922 [2024-11-29 18:44:27.660531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:31:07.922 [2024-11-29 18:44:27.660540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.922 [2024-11-29 18:44:27.660840] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:07.922 [2024-11-29 18:44:27.660876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.922 [2024-11-29 18:44:27.660885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:07.922 [2024-11-29 18:44:27.660895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:07.922 [2024-11-29 18:44:27.660911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.922 [2024-11-29 18:44:27.660978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.922 [2024-11-29 18:44:27.660989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:07.922 [2024-11-29 18:44:27.660999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:31:07.922 [2024-11-29 18:44:27.661011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.922 [2024-11-29 18:44:27.661309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:07.922 [2024-11-29 18:44:27.661324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:07.922 [2024-11-29 18:44:27.661334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:31:07.922 [2024-11-29 18:44:27.661345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.922 [2024-11-29 18:44:27.661438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.922 [2024-11-29 18:44:27.661472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:07.922 [2024-11-29 18:44:27.661482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:31:07.922 [2024-11-29 18:44:27.661491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.922 [2024-11-29 18:44:27.661516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.922 [2024-11-29 18:44:27.661526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:07.922 [2024-11-29 18:44:27.661536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:07.922 [2024-11-29 18:44:27.661546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.922 [2024-11-29 18:44:27.661575] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:07.922 [2024-11-29 18:44:27.664379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.922 [2024-11-29 18:44:27.664426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:07.922 [2024-11-29 18:44:27.664438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.810 ms 00:31:07.922 [2024-11-29 18:44:27.664447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.922 [2024-11-29 18:44:27.664499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.922 [2024-11-29 18:44:27.664513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:07.922 [2024-11-29 18:44:27.664522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:31:07.922 [2024-11-29 18:44:27.664530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.922 [2024-11-29 18:44:27.664580] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:07.922 [2024-11-29 18:44:27.664612] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:07.922 [2024-11-29 18:44:27.664650] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:07.922 [2024-11-29 18:44:27.664673] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:07.922 [2024-11-29 18:44:27.664784] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:07.922 [2024-11-29 18:44:27.664796] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:07.922 [2024-11-29 18:44:27.664809] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:07.922 [2024-11-29 18:44:27.664828] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:07.922 [2024-11-29 18:44:27.664841] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:07.922 [2024-11-29 18:44:27.664854] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:07.922 [2024-11-29 18:44:27.664862] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:07.922 [2024-11-29 18:44:27.664870] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:07.922 [2024-11-29 18:44:27.664879] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:07.922 [2024-11-29 18:44:27.664888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.922 [2024-11-29 18:44:27.664897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:07.922 [2024-11-29 18:44:27.664905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:31:07.922 [2024-11-29 18:44:27.664913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.922 [2024-11-29 18:44:27.664995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.922 [2024-11-29 18:44:27.665007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:07.922 [2024-11-29 18:44:27.665016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:31:07.922 [2024-11-29 18:44:27.665024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.922 [2024-11-29 18:44:27.665129] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:07.922 [2024-11-29 18:44:27.665142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:07.922 [2024-11-29 18:44:27.665160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:07.922 [2024-11-29 18:44:27.665169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:07.922 [2024-11-29 18:44:27.665178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:07.922 [2024-11-29 18:44:27.665197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:07.922 [2024-11-29 18:44:27.665206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:07.922 [2024-11-29 18:44:27.665216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:07.922 [2024-11-29 18:44:27.665226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:07.922 [2024-11-29 18:44:27.665236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:07.922 [2024-11-29 18:44:27.665244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:07.922 [2024-11-29 18:44:27.665254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:07.922 [2024-11-29 18:44:27.665263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:07.922 [2024-11-29 18:44:27.665271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:07.922 [2024-11-29 18:44:27.665280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:07.922 [2024-11-29 18:44:27.665287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:07.922 [2024-11-29 18:44:27.665296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:07.922 [2024-11-29 18:44:27.665307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:07.922 [2024-11-29 18:44:27.665318] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:07.922 [2024-11-29 18:44:27.665327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:07.922 [2024-11-29 18:44:27.665334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:07.922 [2024-11-29 18:44:27.665342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:07.922 [2024-11-29 18:44:27.665350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:07.922 [2024-11-29 18:44:27.665358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:07.922 [2024-11-29 18:44:27.665365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:07.922 [2024-11-29 18:44:27.665375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:07.922 [2024-11-29 18:44:27.665384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:07.922 [2024-11-29 18:44:27.665391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:07.922 [2024-11-29 18:44:27.665398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:07.922 [2024-11-29 18:44:27.665405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:07.922 [2024-11-29 18:44:27.665413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:07.922 [2024-11-29 18:44:27.665420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:07.922 [2024-11-29 18:44:27.665427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:07.922 [2024-11-29 18:44:27.665433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:07.922 [2024-11-29 18:44:27.665446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:07.922 [2024-11-29 18:44:27.665467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:07.922 [2024-11-29 18:44:27.665475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:07.922 [2024-11-29 18:44:27.665482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:07.922 [2024-11-29 18:44:27.665491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:07.922 [2024-11-29 18:44:27.665499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:07.922 [2024-11-29 18:44:27.665505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:07.922 [2024-11-29 18:44:27.665512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:07.922 [2024-11-29 18:44:27.665520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:07.922 [2024-11-29 18:44:27.665529] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:07.922 [2024-11-29 18:44:27.665538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:07.922 [2024-11-29 18:44:27.665546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:07.922 [2024-11-29 18:44:27.665557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:07.922 [2024-11-29 18:44:27.665568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:07.922 [2024-11-29 18:44:27.665578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:07.922 [2024-11-29 18:44:27.665587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:07.923 
[2024-11-29 18:44:27.665597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:07.923 [2024-11-29 18:44:27.665605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:07.923 [2024-11-29 18:44:27.665612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:07.923 [2024-11-29 18:44:27.665634] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:07.923 [2024-11-29 18:44:27.665644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:07.923 [2024-11-29 18:44:27.665654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:07.923 [2024-11-29 18:44:27.665662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:07.923 [2024-11-29 18:44:27.665672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:07.923 [2024-11-29 18:44:27.665680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:07.923 [2024-11-29 18:44:27.665687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:07.923 [2024-11-29 18:44:27.665694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:07.923 [2024-11-29 18:44:27.665702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:07.923 [2024-11-29 18:44:27.665709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:07.923 [2024-11-29 18:44:27.665716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:07.923 [2024-11-29 18:44:27.665726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:07.923 [2024-11-29 18:44:27.665739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:07.923 [2024-11-29 18:44:27.665750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:07.923 [2024-11-29 18:44:27.665757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:07.923 [2024-11-29 18:44:27.665765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:07.923 [2024-11-29 18:44:27.665773] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:07.923 [2024-11-29 18:44:27.665782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:07.923 [2024-11-29 18:44:27.665790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:07.923 [2024-11-29 18:44:27.665798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:07.923 [2024-11-29 18:44:27.665806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:07.923 [2024-11-29 18:44:27.665813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:07.923 [2024-11-29 18:44:27.665823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.923 [2024-11-29 18:44:27.665832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:07.923 [2024-11-29 18:44:27.665840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:31:07.923 [2024-11-29 18:44:27.665848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.923 [2024-11-29 18:44:27.679721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.923 [2024-11-29 18:44:27.679767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:07.923 [2024-11-29 18:44:27.679780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.828 ms 00:31:07.923 [2024-11-29 18:44:27.679788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.923 [2024-11-29 18:44:27.679877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.923 [2024-11-29 18:44:27.679886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:07.923 [2024-11-29 18:44:27.679896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:31:07.923 [2024-11-29 18:44:27.679904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.923 [2024-11-29 18:44:27.711269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.923 [2024-11-29 18:44:27.711328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:07.923 [2024-11-29 18:44:27.711343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.302 ms 00:31:07.923 [2024-11-29 18:44:27.711358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.923 [2024-11-29 18:44:27.711407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.923 [2024-11-29 18:44:27.711422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:07.923 [2024-11-29 18:44:27.711435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:07.923 [2024-11-29 18:44:27.711444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.923 [2024-11-29 18:44:27.711586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.923 [2024-11-29 18:44:27.711611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:07.923 [2024-11-29 18:44:27.711621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:31:07.923 [2024-11-29 18:44:27.711632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.923 [2024-11-29 18:44:27.711774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.923 [2024-11-29 18:44:27.711791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:07.923 [2024-11-29 18:44:27.711802] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:31:07.923 [2024-11-29 18:44:27.711810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.923 [2024-11-29 18:44:27.722765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.923 [2024-11-29 18:44:27.722812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:07.923 [2024-11-29 18:44:27.722840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.933 ms 00:31:07.923 [2024-11-29 18:44:27.722849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.923 [2024-11-29 18:44:27.722993] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:07.923 [2024-11-29 18:44:27.723009] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:07.923 [2024-11-29 18:44:27.723020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.923 [2024-11-29 18:44:27.723029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:07.923 [2024-11-29 18:44:27.723038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:31:07.923 [2024-11-29 18:44:27.723050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.923 [2024-11-29 18:44:27.735407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.923 [2024-11-29 18:44:27.735467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:07.923 [2024-11-29 18:44:27.735480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.333 ms 00:31:07.923 [2024-11-29 18:44:27.735490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.923 [2024-11-29 18:44:27.735632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.923 [2024-11-29 18:44:27.735652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:07.923 [2024-11-29 18:44:27.735661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:31:07.923 [2024-11-29 18:44:27.735673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.923 [2024-11-29 18:44:27.735724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.923 [2024-11-29 18:44:27.735740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:07.923 [2024-11-29 18:44:27.735749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:07.923 [2024-11-29 18:44:27.735757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.923 [2024-11-29 18:44:27.736107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.923 [2024-11-29 18:44:27.736137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:07.923 [2024-11-29 18:44:27.736147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:31:07.923 [2024-11-29 18:44:27.736155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.923 [2024-11-29 18:44:27.736173] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:07.923 [2024-11-29 18:44:27.736186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.923 [2024-11-29 18:44:27.736198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:07.923 [2024-11-29 18:44:27.736207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:07.923 [2024-11-29 18:44:27.736216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.923 [2024-11-29 18:44:27.747113] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:07.923 [2024-11-29 18:44:27.747274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.923 [2024-11-29 18:44:27.747286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:07.923 [2024-11-29 18:44:27.747298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.039 ms 00:31:07.923 [2024-11-29 18:44:27.747313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.923 [2024-11-29 18:44:27.749977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.923 [2024-11-29 18:44:27.750276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:07.923 [2024-11-29 18:44:27.750305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.639 ms 00:31:07.923 [2024-11-29 18:44:27.750314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.923 [2024-11-29 18:44:27.750429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.923 [2024-11-29 18:44:27.750442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:07.923 [2024-11-29 18:44:27.750477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:31:07.923 [2024-11-29 18:44:27.750491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.923 [2024-11-29 18:44:27.750521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.923 [2024-11-29 18:44:27.750530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:07.923 [2024-11-29 18:44:27.750540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:07.923 [2024-11-29 18:44:27.750555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.923 [2024-11-29 18:44:27.750599] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:07.924 [2024-11-29 18:44:27.750616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.924 [2024-11-29 18:44:27.750626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:07.924 [2024-11-29 18:44:27.750634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:31:07.924 [2024-11-29 18:44:27.750643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.924 [2024-11-29 18:44:27.757532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.924 [2024-11-29 18:44:27.757602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:07.924 [2024-11-29 18:44:27.757615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.864 ms 00:31:07.924 [2024-11-29 18:44:27.757623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.924 [2024-11-29 18:44:27.757715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.924 [2024-11-29 18:44:27.757727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:07.924 [2024-11-29 18:44:27.757741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.045 ms 00:31:07.924 [2024-11-29 18:44:27.757751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.924 [2024-11-29 18:44:27.759117] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 98.660 ms, result 0 00:31:08.867  [2024-11-29T18:44:30.160Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-29T18:44:31.103Z] Copying: 34/1024 [MB] (17 MBps) [2024-11-29T18:44:32.046Z] Copying: 46/1024 [MB] (12 MBps) [2024-11-29T18:44:32.988Z] Copying: 57/1024 [MB] (10 MBps) [2024-11-29T18:44:33.932Z] Copying: 69/1024 [MB] (12 MBps) [2024-11-29T18:44:34.872Z] Copying: 94/1024 [MB] (24 MBps) [2024-11-29T18:44:35.834Z] Copying: 108/1024 [MB] (14 MBps) [2024-11-29T18:44:36.777Z] Copying: 132/1024 [MB] (23 MBps) [2024-11-29T18:44:38.165Z] Copying: 147/1024 [MB] (14 MBps) [2024-11-29T18:44:39.106Z] Copying: 164/1024 [MB] (17 MBps) [2024-11-29T18:44:40.049Z] Copying: 195/1024 [MB] (31 MBps) [2024-11-29T18:44:41.089Z] Copying: 219/1024 [MB] (23 MBps) [2024-11-29T18:44:42.099Z] Copying: 236/1024 [MB] (17 MBps) [2024-11-29T18:44:43.043Z] Copying: 253/1024 [MB] (16 MBps) [2024-11-29T18:44:43.987Z] Copying: 271/1024 [MB] (18 MBps) [2024-11-29T18:44:44.932Z] Copying: 293/1024 [MB] (21 MBps) [2024-11-29T18:44:45.873Z] Copying: 309/1024 [MB] (16 MBps) [2024-11-29T18:44:46.816Z] Copying: 334/1024 [MB] (25 MBps) [2024-11-29T18:44:48.204Z] Copying: 345/1024 [MB] (10 MBps) [2024-11-29T18:44:48.777Z] Copying: 355/1024 [MB] (10 MBps) [2024-11-29T18:44:50.165Z] Copying: 366/1024 [MB] (11 MBps) [2024-11-29T18:44:51.108Z] Copying: 380/1024 [MB] (13 MBps) [2024-11-29T18:44:52.048Z] Copying: 393/1024 [MB] (13 MBps) [2024-11-29T18:44:52.989Z] Copying: 413024/1048576 [kB] (10200 kBps) [2024-11-29T18:44:53.934Z] Copying: 413/1024 [MB] (10 MBps) [2024-11-29T18:44:54.880Z] Copying: 433484/1048576 [kB] (10200 kBps) [2024-11-29T18:44:55.821Z] Copying: 443696/1048576 [kB] (10212 kBps) [2024-11-29T18:44:57.207Z] Copying: 447/1024 [MB] (14 MBps) [2024-11-29T18:44:57.774Z] Copying: 468888/1048576 [kB] (10184 kBps) [2024-11-29T18:44:59.148Z] Copying: 469/1024 [MB] (11 MBps) [2024-11-29T18:45:00.090Z] Copying: 481/1024 [MB] (12 MBps) [2024-11-29T18:45:01.034Z] Copying: 493/1024 [MB] (11 MBps) [2024-11-29T18:45:01.975Z] Copying: 504/1024 [MB] (10 MBps) [2024-11-29T18:45:02.918Z] Copying: 514/1024 [MB] (10 MBps) [2024-11-29T18:45:03.862Z] Copying: 525/1024 [MB] (10 MBps) [2024-11-29T18:45:04.806Z] Copying: 535/1024 [MB] (10 MBps) [2024-11-29T18:45:06.190Z] Copying: 546/1024 [MB] (10 MBps) [2024-11-29T18:45:07.133Z] Copying: 556/1024 [MB] (10 MBps) [2024-11-29T18:45:08.077Z] Copying: 577/1024 [MB] (20 MBps) [2024-11-29T18:45:09.020Z] Copying: 593/1024 [MB] (16 MBps) [2024-11-29T18:45:09.961Z] Copying: 611/1024 [MB] (17 MBps) [2024-11-29T18:45:10.902Z] Copying: 624/1024 [MB] (13 MBps) [2024-11-29T18:45:11.841Z] Copying: 667/1024 [MB] (42 MBps) [2024-11-29T18:45:12.784Z] Copying: 688/1024 [MB] (21 MBps) [2024-11-29T18:45:14.172Z] Copying: 708/1024 [MB] (19 MBps) [2024-11-29T18:45:15.116Z] Copying: 727/1024 [MB] (19 MBps) [2024-11-29T18:45:16.056Z] Copying: 758/1024 [MB] (30 MBps) [2024-11-29T18:45:17.002Z] Copying: 787/1024 [MB] (29 MBps) [2024-11-29T18:45:17.947Z] Copying: 808/1024 [MB] (21 MBps) [2024-11-29T18:45:18.891Z] Copying: 828/1024 [MB] (20 MBps) [2024-11-29T18:45:19.838Z] Copying: 846/1024 [MB] (17 MBps) [2024-11-29T18:45:20.781Z] Copying: 861/1024 [MB] (14 MBps) [2024-11-29T18:45:22.171Z] Copying: 875/1024 [MB] (13 MBps) 
[2024-11-29T18:45:23.133Z] Copying: 890/1024 [MB] (15 MBps) [2024-11-29T18:45:24.123Z] Copying: 904/1024 [MB] (13 MBps) [2024-11-29T18:45:25.069Z] Copying: 916/1024 [MB] (11 MBps) [2024-11-29T18:45:26.012Z] Copying: 928/1024 [MB] (12 MBps) [2024-11-29T18:45:26.956Z] Copying: 941/1024 [MB] (12 MBps) [2024-11-29T18:45:27.901Z] Copying: 956/1024 [MB] (15 MBps) [2024-11-29T18:45:28.845Z] Copying: 972/1024 [MB] (15 MBps) [2024-11-29T18:45:29.790Z] Copying: 990/1024 [MB] (17 MBps) [2024-11-29T18:45:31.177Z] Copying: 1009/1024 [MB] (19 MBps) [2024-11-29T18:45:31.752Z] Copying: 1023/1024 [MB] (13 MBps) [2024-11-29T18:45:31.752Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-29 18:45:31.564846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.847 [2024-11-29 18:45:31.565066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:11.847 [2024-11-29 18:45:31.565150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:11.847 [2024-11-29 18:45:31.565177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.847 [2024-11-29 18:45:31.566646] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:11.847 [2024-11-29 18:45:31.568315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.847 [2024-11-29 18:45:31.568515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:11.847 [2024-11-29 18:45:31.568632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms 00:32:11.847 [2024-11-29 18:45:31.568661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.847 [2024-11-29 18:45:31.581536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.847 [2024-11-29 18:45:31.581741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:11.847 [2024-11-29 18:45:31.581864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.648 ms 00:32:11.847 [2024-11-29 18:45:31.581891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.847 [2024-11-29 18:45:31.581941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.847 [2024-11-29 18:45:31.581965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:11.847 [2024-11-29 18:45:31.581986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:11.847 [2024-11-29 18:45:31.582021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.847 [2024-11-29 18:45:31.582100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.847 [2024-11-29 18:45:31.582127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:11.847 [2024-11-29 18:45:31.582149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:11.847 [2024-11-29 18:45:31.582231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.847 [2024-11-29 18:45:31.582263] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:11.847 [2024-11-29 18:45:31.582290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127744 / 261120 wr_cnt: 1 state: open 00:32:11.847 [2024-11-29 18:45:31.582333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582364] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582763] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 
[2024-11-29 18:45:31.582969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.582998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.583006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.583014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.583022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.583029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.583036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.583044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:11.847 [2024-11-29 18:45:31.583053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:32:11.848 [2024-11-29 18:45:31.583162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:11.848 [2024-11-29 18:45:31.583348] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:11.848 [2024-11-29 18:45:31.583358] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 941bac55-3541-4ffc-a49c-047f115f854f 
00:32:11.848 [2024-11-29 18:45:31.583366] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127744 00:32:11.848 [2024-11-29 18:45:31.583374] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127776 00:32:11.848 [2024-11-29 18:45:31.583387] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127744 00:32:11.848 [2024-11-29 18:45:31.583410] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:32:11.848 [2024-11-29 18:45:31.583421] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:11.848 [2024-11-29 18:45:31.583429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:11.848 [2024-11-29 18:45:31.583437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:11.848 [2024-11-29 18:45:31.583444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:11.848 [2024-11-29 18:45:31.583463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:11.848 [2024-11-29 18:45:31.583472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.848 [2024-11-29 18:45:31.583481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:11.848 [2024-11-29 18:45:31.583490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.209 ms 00:32:11.848 [2024-11-29 18:45:31.583498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.848 [2024-11-29 18:45:31.586156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.848 [2024-11-29 18:45:31.586193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:11.848 [2024-11-29 18:45:31.586213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.639 ms 00:32:11.848 [2024-11-29 18:45:31.586222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.848 [2024-11-29 18:45:31.586351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.848 [2024-11-29 18:45:31.586361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:11.848 [2024-11-29 18:45:31.586370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:32:11.848 [2024-11-29 18:45:31.586378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.848 [2024-11-29 18:45:31.594427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.848 [2024-11-29 18:45:31.594513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:11.848 [2024-11-29 18:45:31.594525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.848 [2024-11-29 18:45:31.594533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.848 [2024-11-29 18:45:31.594601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.848 [2024-11-29 18:45:31.594611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:11.848 [2024-11-29 18:45:31.594619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.848 [2024-11-29 18:45:31.594627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.848 [2024-11-29 18:45:31.594666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.848 [2024-11-29 18:45:31.594681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:11.848 [2024-11-29 
18:45:31.594690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.848 [2024-11-29 18:45:31.594698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.848 [2024-11-29 18:45:31.594714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.848 [2024-11-29 18:45:31.594723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:11.848 [2024-11-29 18:45:31.594731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.848 [2024-11-29 18:45:31.594743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.848 [2024-11-29 18:45:31.609531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.848 [2024-11-29 18:45:31.609591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:11.848 [2024-11-29 18:45:31.609603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.848 [2024-11-29 18:45:31.609612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.848 [2024-11-29 18:45:31.620878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.848 [2024-11-29 18:45:31.621089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:11.848 [2024-11-29 18:45:31.621107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.848 [2024-11-29 18:45:31.621116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.848 [2024-11-29 18:45:31.621167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.848 [2024-11-29 18:45:31.621178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:11.848 [2024-11-29 18:45:31.621194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.848 [2024-11-29 18:45:31.621202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.848 [2024-11-29 18:45:31.621237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.848 [2024-11-29 18:45:31.621246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:11.848 [2024-11-29 18:45:31.621255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.848 [2024-11-29 18:45:31.621272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.848 [2024-11-29 18:45:31.621336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.848 [2024-11-29 18:45:31.621346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:11.848 [2024-11-29 18:45:31.621359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.848 [2024-11-29 18:45:31.621370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.848 [2024-11-29 18:45:31.621396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.848 [2024-11-29 18:45:31.621406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:11.848 [2024-11-29 18:45:31.621414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.848 [2024-11-29 18:45:31.621422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.848 [2024-11-29 18:45:31.621509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.848 [2024-11-29 18:45:31.621520] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:11.848 [2024-11-29 18:45:31.621533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.848 [2024-11-29 18:45:31.621541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.849 [2024-11-29 18:45:31.621596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.849 [2024-11-29 18:45:31.621608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:11.849 [2024-11-29 18:45:31.621617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.849 [2024-11-29 18:45:31.621627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.849 [2024-11-29 18:45:31.621761] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 59.803 ms, result 0 00:32:12.792 00:32:12.792 00:32:12.792 18:45:32 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:12.792 [2024-11-29 18:45:32.628559] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:32:12.792 [2024-11-29 18:45:32.628953] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96489 ] 00:32:13.053 [2024-11-29 18:45:32.790173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.053 [2024-11-29 18:45:32.820378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.054 [2024-11-29 18:45:32.933357] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:13.054 [2024-11-29 18:45:32.933435] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:13.317 [2024-11-29 18:45:33.095715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.317 [2024-11-29 18:45:33.095783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:13.317 [2024-11-29 18:45:33.095802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:13.317 [2024-11-29 18:45:33.095811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.317 [2024-11-29 18:45:33.095871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.317 [2024-11-29 18:45:33.095882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:13.317 [2024-11-29 18:45:33.095891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:32:13.317 [2024-11-29 18:45:33.095899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.317 [2024-11-29 18:45:33.095923] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:13.317 [2024-11-29 18:45:33.096212] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:13.317 [2024-11-29 18:45:33.096230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.317 [2024-11-29 18:45:33.096242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:13.317 [2024-11-29 18:45:33.096258] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:32:13.317 [2024-11-29 18:45:33.096266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.317 [2024-11-29 18:45:33.096587] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:13.317 [2024-11-29 18:45:33.096637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.317 [2024-11-29 18:45:33.096646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:13.317 [2024-11-29 18:45:33.096656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:32:13.317 [2024-11-29 18:45:33.096668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.317 [2024-11-29 18:45:33.096731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.317 [2024-11-29 18:45:33.096742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:13.317 [2024-11-29 18:45:33.096751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:32:13.317 [2024-11-29 18:45:33.096759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.317 [2024-11-29 18:45:33.097062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.317 [2024-11-29 18:45:33.097075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:13.317 [2024-11-29 18:45:33.097084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:32:13.317 [2024-11-29 18:45:33.097096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.317 [2024-11-29 18:45:33.097181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.317 [2024-11-29 18:45:33.097205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:13.317 [2024-11-29 18:45:33.097215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:32:13.317 [2024-11-29 18:45:33.097222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.317 [2024-11-29 18:45:33.097246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.317 [2024-11-29 18:45:33.097259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:13.317 [2024-11-29 18:45:33.097267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:13.317 [2024-11-29 18:45:33.097275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.317 [2024-11-29 18:45:33.097301] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:13.317 [2024-11-29 18:45:33.099587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.317 [2024-11-29 18:45:33.099639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:13.317 [2024-11-29 18:45:33.099655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.295 ms 00:32:13.317 [2024-11-29 18:45:33.099664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.317 [2024-11-29 18:45:33.099705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.317 [2024-11-29 18:45:33.099719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:13.317 [2024-11-29 18:45:33.099734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:13.317 [2024-11-29 18:45:33.099744] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.317 [2024-11-29 18:45:33.099801] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:13.317 [2024-11-29 18:45:33.099829] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:13.317 [2024-11-29 18:45:33.099866] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:13.317 [2024-11-29 18:45:33.099884] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:13.317 [2024-11-29 18:45:33.099991] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:13.317 [2024-11-29 18:45:33.100003] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:13.317 [2024-11-29 18:45:33.100016] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:13.317 [2024-11-29 18:45:33.100027] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:13.317 [2024-11-29 18:45:33.100041] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:13.317 [2024-11-29 18:45:33.100050] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:13.317 [2024-11-29 18:45:33.100057] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:13.317 [2024-11-29 18:45:33.100064] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:13.317 [2024-11-29 18:45:33.100076] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:13.317 [2024-11-29 18:45:33.100084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.317 [2024-11-29 18:45:33.100094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:13.317 [2024-11-29 18:45:33.100102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:32:13.317 [2024-11-29 18:45:33.100111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.317 [2024-11-29 18:45:33.100202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.317 [2024-11-29 18:45:33.100214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:13.317 [2024-11-29 18:45:33.100222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:13.317 [2024-11-29 18:45:33.100229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.317 [2024-11-29 18:45:33.100335] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:13.317 [2024-11-29 18:45:33.100346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:13.317 [2024-11-29 18:45:33.100354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:13.317 [2024-11-29 18:45:33.100363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.317 [2024-11-29 18:45:33.100371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:13.318 [2024-11-29 18:45:33.100384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:13.318 [2024-11-29 18:45:33.100391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 
MiB 00:32:13.318 [2024-11-29 18:45:33.100399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:13.318 [2024-11-29 18:45:33.100409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:13.318 [2024-11-29 18:45:33.100416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:13.318 [2024-11-29 18:45:33.100423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:13.318 [2024-11-29 18:45:33.100431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:13.318 [2024-11-29 18:45:33.100438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:13.318 [2024-11-29 18:45:33.100448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:13.318 [2024-11-29 18:45:33.100731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:13.318 [2024-11-29 18:45:33.100758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.318 [2024-11-29 18:45:33.100778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:13.318 [2024-11-29 18:45:33.100798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:13.318 [2024-11-29 18:45:33.100816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.318 [2024-11-29 18:45:33.100835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:13.318 [2024-11-29 18:45:33.100853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:13.318 [2024-11-29 18:45:33.100871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:13.318 [2024-11-29 18:45:33.100889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:13.318 [2024-11-29 18:45:33.100908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:13.318 [2024-11-29 18:45:33.100934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:13.318 [2024-11-29 18:45:33.100953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:13.318 [2024-11-29 18:45:33.100970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:13.318 [2024-11-29 18:45:33.100989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:13.318 [2024-11-29 18:45:33.101071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:13.318 [2024-11-29 18:45:33.101094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:13.318 [2024-11-29 18:45:33.101113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:13.318 [2024-11-29 18:45:33.101131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:13.318 [2024-11-29 18:45:33.101149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:13.318 [2024-11-29 18:45:33.101169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:13.318 [2024-11-29 18:45:33.101189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:13.318 [2024-11-29 18:45:33.101207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:13.318 [2024-11-29 18:45:33.101225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:13.318 [2024-11-29 18:45:33.101244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:13.318 [2024-11-29 18:45:33.101262] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:13.318 [2024-11-29 18:45:33.101280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.318 [2024-11-29 18:45:33.101303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:13.318 [2024-11-29 18:45:33.101322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:13.318 [2024-11-29 18:45:33.101339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.318 [2024-11-29 18:45:33.101399] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:13.318 [2024-11-29 18:45:33.101423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:13.318 [2024-11-29 18:45:33.101443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:13.318 [2024-11-29 18:45:33.101485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.318 [2024-11-29 18:45:33.101507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:13.318 [2024-11-29 18:45:33.101525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:13.318 [2024-11-29 18:45:33.101544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:13.318 [2024-11-29 18:45:33.101563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:13.318 [2024-11-29 18:45:33.101581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:13.318 [2024-11-29 18:45:33.101599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:13.318 [2024-11-29 18:45:33.101620] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:13.318 [2024-11-29 18:45:33.101652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:13.318 [2024-11-29 18:45:33.101682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:13.318 [2024-11-29 18:45:33.101761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:13.318 [2024-11-29 18:45:33.101794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:13.318 [2024-11-29 18:45:33.101823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:13.318 [2024-11-29 18:45:33.101852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:13.318 [2024-11-29 18:45:33.101880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:13.318 [2024-11-29 18:45:33.101908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:13.318 [2024-11-29 18:45:33.101917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:13.318 [2024-11-29 18:45:33.101925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:13.318 [2024-11-29 18:45:33.101932] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:13.318 [2024-11-29 18:45:33.101939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:13.318 [2024-11-29 18:45:33.101948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:13.318 [2024-11-29 18:45:33.101956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:13.318 [2024-11-29 18:45:33.101964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:13.318 [2024-11-29 18:45:33.101972] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:13.318 [2024-11-29 18:45:33.101981] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:13.318 [2024-11-29 18:45:33.102006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:13.318 [2024-11-29 18:45:33.102017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:13.318 [2024-11-29 18:45:33.102026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:13.318 [2024-11-29 18:45:33.102034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:13.318 [2024-11-29 18:45:33.102044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.318 [2024-11-29 18:45:33.102052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:13.318 [2024-11-29 18:45:33.102061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.778 ms 00:32:13.318 [2024-11-29 18:45:33.102068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.318 [2024-11-29 18:45:33.112402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.318 [2024-11-29 18:45:33.112625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:13.318 [2024-11-29 18:45:33.112646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.271 ms 00:32:13.318 [2024-11-29 18:45:33.112655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.318 [2024-11-29 18:45:33.112760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.318 [2024-11-29 18:45:33.112769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:13.318 [2024-11-29 18:45:33.112777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:32:13.318 [2024-11-29 18:45:33.112785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.318 [2024-11-29 18:45:33.134953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.318 [2024-11-29 18:45:33.135197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:13.318 [2024-11-29 18:45:33.135236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 22.104 ms 00:32:13.318 [2024-11-29 18:45:33.135255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.318 [2024-11-29 18:45:33.135319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.318 [2024-11-29 18:45:33.135348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:13.318 [2024-11-29 18:45:33.135363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:13.318 [2024-11-29 18:45:33.135374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.318 [2024-11-29 18:45:33.135562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.318 [2024-11-29 18:45:33.135586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:13.318 [2024-11-29 18:45:33.135600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:32:13.318 [2024-11-29 18:45:33.135612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.318 [2024-11-29 18:45:33.135801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.318 [2024-11-29 18:45:33.135817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:13.318 [2024-11-29 18:45:33.135837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:32:13.318 [2024-11-29 18:45:33.135851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.318 [2024-11-29 18:45:33.144312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.318 [2024-11-29 18:45:33.144361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:13.319 [2024-11-29 18:45:33.144386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.432 ms 00:32:13.319 [2024-11-29 18:45:33.144398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.319 [2024-11-29 18:45:33.144538] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:13.319 [2024-11-29 18:45:33.144552] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:13.319 [2024-11-29 18:45:33.144562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.319 [2024-11-29 18:45:33.144571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:13.319 [2024-11-29 18:45:33.144581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:32:13.319 [2024-11-29 18:45:33.144592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.319 [2024-11-29 18:45:33.157042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.319 [2024-11-29 18:45:33.157090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:13.319 [2024-11-29 18:45:33.157103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.427 ms 00:32:13.319 [2024-11-29 18:45:33.157112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.319 [2024-11-29 18:45:33.157253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.319 [2024-11-29 18:45:33.157264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:13.319 [2024-11-29 18:45:33.157273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:32:13.319 [2024-11-29 18:45:33.157292] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.319 [2024-11-29 18:45:33.157347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.319 [2024-11-29 18:45:33.157361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:13.319 [2024-11-29 18:45:33.157370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:13.319 [2024-11-29 18:45:33.157379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.319 [2024-11-29 18:45:33.157769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.319 [2024-11-29 18:45:33.157784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:13.319 [2024-11-29 18:45:33.157793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:32:13.319 [2024-11-29 18:45:33.157801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.319 [2024-11-29 18:45:33.157820] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:13.319 [2024-11-29 18:45:33.157830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.319 [2024-11-29 18:45:33.157842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:13.319 [2024-11-29 18:45:33.157851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:13.319 [2024-11-29 18:45:33.157861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.319 [2024-11-29 18:45:33.167392] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:13.319 [2024-11-29 18:45:33.167587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.319 [2024-11-29 18:45:33.167600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:13.319 [2024-11-29 18:45:33.167610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.701 ms 00:32:13.319 [2024-11-29 18:45:33.167622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.319 [2024-11-29 18:45:33.170228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.319 [2024-11-29 18:45:33.170270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:13.319 [2024-11-29 18:45:33.170280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:32:13.319 [2024-11-29 18:45:33.170289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.319 [2024-11-29 18:45:33.170374] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:32:13.319 [2024-11-29 18:45:33.171001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.319 [2024-11-29 18:45:33.171020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:13.319 [2024-11-29 18:45:33.171033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:32:13.319 [2024-11-29 18:45:33.171041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.319 [2024-11-29 18:45:33.171069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.319 [2024-11-29 18:45:33.171078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:13.319 [2024-11-29 18:45:33.171086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.006 ms 00:32:13.319 [2024-11-29 18:45:33.171097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.319 [2024-11-29 18:45:33.171136] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:13.319 [2024-11-29 18:45:33.171145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.319 [2024-11-29 18:45:33.171153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:13.319 [2024-11-29 18:45:33.171161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:13.319 [2024-11-29 18:45:33.171171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.319 [2024-11-29 18:45:33.178052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.319 [2024-11-29 18:45:33.178106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:13.319 [2024-11-29 18:45:33.178118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.862 ms 00:32:13.319 [2024-11-29 18:45:33.178126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.319 [2024-11-29 18:45:33.178223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.319 [2024-11-29 18:45:33.178237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:13.319 [2024-11-29 18:45:33.178246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:32:13.319 [2024-11-29 18:45:33.178254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.319 [2024-11-29 18:45:33.179666] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 83.462 ms, result 0 00:32:14.708  [2024-11-29T18:45:35.557Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-29T18:45:36.501Z] Copying: 26/1024 [MB] (14 MBps) [2024-11-29T18:45:37.445Z] Copying: 48/1024 [MB] (21 MBps) [2024-11-29T18:45:38.390Z] Copying: 63/1024 [MB] (15 MBps) [2024-11-29T18:45:39.781Z] Copying: 82/1024 [MB] (18 MBps) [2024-11-29T18:45:40.725Z] Copying: 99/1024 [MB] (16 MBps) [2024-11-29T18:45:41.670Z] Copying: 115/1024 [MB] (15 MBps) [2024-11-29T18:45:42.615Z] Copying: 133/1024 [MB] (17 MBps) [2024-11-29T18:45:43.559Z] Copying: 145/1024 [MB] (12 MBps) [2024-11-29T18:45:44.503Z] Copying: 159/1024 [MB] (13 MBps) [2024-11-29T18:45:45.447Z] Copying: 172/1024 [MB] (13 MBps) [2024-11-29T18:45:46.389Z] Copying: 186/1024 [MB] (13 MBps) [2024-11-29T18:45:47.775Z] Copying: 197/1024 [MB] (10 MBps) [2024-11-29T18:45:48.719Z] Copying: 209/1024 [MB] (11 MBps) [2024-11-29T18:45:49.664Z] Copying: 222/1024 [MB] (13 MBps) [2024-11-29T18:45:50.609Z] Copying: 233/1024 [MB] (10 MBps) [2024-11-29T18:45:51.553Z] Copying: 243/1024 [MB] (10 MBps) [2024-11-29T18:45:52.495Z] Copying: 263/1024 [MB] (19 MBps) [2024-11-29T18:45:53.438Z] Copying: 273/1024 [MB] (10 MBps) [2024-11-29T18:45:54.429Z] Copying: 284/1024 [MB] (10 MBps) [2024-11-29T18:45:55.412Z] Copying: 300/1024 [MB] (16 MBps) [2024-11-29T18:45:56.797Z] Copying: 313/1024 [MB] (13 MBps) [2024-11-29T18:45:57.744Z] Copying: 344/1024 [MB] (30 MBps) [2024-11-29T18:45:58.689Z] Copying: 362/1024 [MB] (18 MBps) [2024-11-29T18:45:59.633Z] Copying: 386/1024 [MB] (23 MBps) [2024-11-29T18:46:00.576Z] Copying: 412/1024 [MB] (26 MBps) [2024-11-29T18:46:01.517Z] Copying: 444/1024 [MB] (32 MBps) [2024-11-29T18:46:02.460Z] Copying: 465/1024 [MB] (20 MBps) [2024-11-29T18:46:03.405Z] Copying: 483/1024 [MB] (18 MBps) 
[2024-11-29T18:46:04.792Z] Copying: 509/1024 [MB] (25 MBps) [2024-11-29T18:46:05.734Z] Copying: 520/1024 [MB] (10 MBps) [2024-11-29T18:46:06.679Z] Copying: 532/1024 [MB] (12 MBps) [2024-11-29T18:46:07.624Z] Copying: 550/1024 [MB] (18 MBps) [2024-11-29T18:46:08.569Z] Copying: 567/1024 [MB] (16 MBps) [2024-11-29T18:46:09.513Z] Copying: 590/1024 [MB] (23 MBps) [2024-11-29T18:46:10.459Z] Copying: 612/1024 [MB] (21 MBps) [2024-11-29T18:46:11.405Z] Copying: 632/1024 [MB] (20 MBps) [2024-11-29T18:46:12.794Z] Copying: 644/1024 [MB] (11 MBps) [2024-11-29T18:46:13.738Z] Copying: 663/1024 [MB] (19 MBps) [2024-11-29T18:46:14.681Z] Copying: 683/1024 [MB] (19 MBps) [2024-11-29T18:46:15.625Z] Copying: 705/1024 [MB] (22 MBps) [2024-11-29T18:46:16.568Z] Copying: 723/1024 [MB] (17 MBps) [2024-11-29T18:46:17.512Z] Copying: 734/1024 [MB] (11 MBps) [2024-11-29T18:46:18.455Z] Copying: 750/1024 [MB] (16 MBps) [2024-11-29T18:46:19.395Z] Copying: 762/1024 [MB] (11 MBps) [2024-11-29T18:46:20.778Z] Copying: 772/1024 [MB] (10 MBps) [2024-11-29T18:46:21.722Z] Copying: 783/1024 [MB] (10 MBps) [2024-11-29T18:46:22.669Z] Copying: 794/1024 [MB] (10 MBps) [2024-11-29T18:46:23.615Z] Copying: 804/1024 [MB] (10 MBps) [2024-11-29T18:46:24.559Z] Copying: 830/1024 [MB] (25 MBps) [2024-11-29T18:46:25.505Z] Copying: 846/1024 [MB] (15 MBps) [2024-11-29T18:46:26.448Z] Copying: 863/1024 [MB] (17 MBps) [2024-11-29T18:46:27.392Z] Copying: 884/1024 [MB] (20 MBps) [2024-11-29T18:46:28.781Z] Copying: 895/1024 [MB] (11 MBps) [2024-11-29T18:46:29.726Z] Copying: 912/1024 [MB] (17 MBps) [2024-11-29T18:46:30.679Z] Copying: 933/1024 [MB] (20 MBps) [2024-11-29T18:46:31.709Z] Copying: 956/1024 [MB] (23 MBps) [2024-11-29T18:46:32.653Z] Copying: 976/1024 [MB] (19 MBps) [2024-11-29T18:46:33.597Z] Copying: 994/1024 [MB] (18 MBps) [2024-11-29T18:46:34.171Z] Copying: 1013/1024 [MB] (18 MBps) [2024-11-29T18:46:34.433Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-29 18:46:34.361520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.528 [2024-11-29 18:46:34.361610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:14.528 [2024-11-29 18:46:34.361627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:14.528 [2024-11-29 18:46:34.361636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.528 [2024-11-29 18:46:34.361662] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:14.528 [2024-11-29 18:46:34.362503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.528 [2024-11-29 18:46:34.362534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:14.528 [2024-11-29 18:46:34.362547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:33:14.528 [2024-11-29 18:46:34.362565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.528 [2024-11-29 18:46:34.362817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.528 [2024-11-29 18:46:34.362828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:14.528 [2024-11-29 18:46:34.362839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:33:14.528 [2024-11-29 18:46:34.362848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.528 [2024-11-29 18:46:34.362878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.528 [2024-11-29 
18:46:34.362887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:14.528 [2024-11-29 18:46:34.362896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:14.528 [2024-11-29 18:46:34.362910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.528 [2024-11-29 18:46:34.362974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.528 [2024-11-29 18:46:34.362984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:14.528 [2024-11-29 18:46:34.362993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:33:14.528 [2024-11-29 18:46:34.363001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.528 [2024-11-29 18:46:34.363016] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:14.528 [2024-11-29 18:46:34.363033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:33:14.529 [2024-11-29 18:46:34.363043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363173] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363372] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 
18:46:34.363865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.363993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.364001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.364008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.364015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:14.529 [2024-11-29 18:46:34.364023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:14.530 [2024-11-29 18:46:34.364031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:14.530 [2024-11-29 18:46:34.364038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:14.530 [2024-11-29 18:46:34.364046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 
00:33:14.530 [2024-11-29 18:46:34.364054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:14.530 [2024-11-29 18:46:34.364062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:14.530 [2024-11-29 18:46:34.364069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:14.530 [2024-11-29 18:46:34.364077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:14.530 [2024-11-29 18:46:34.364085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:14.530 [2024-11-29 18:46:34.364094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:14.530 [2024-11-29 18:46:34.364109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:14.530 [2024-11-29 18:46:34.364125] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:14.530 [2024-11-29 18:46:34.364139] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 941bac55-3541-4ffc-a49c-047f115f854f 00:33:14.530 [2024-11-29 18:46:34.364147] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:33:14.530 [2024-11-29 18:46:34.364155] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3360 00:33:14.530 [2024-11-29 18:46:34.364163] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3328 00:33:14.530 [2024-11-29 18:46:34.364174] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0096 00:33:14.530 [2024-11-29 18:46:34.364189] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:14.530 [2024-11-29 18:46:34.364197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:14.530 [2024-11-29 18:46:34.364205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:14.530 [2024-11-29 18:46:34.364213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:14.530 [2024-11-29 18:46:34.364220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:14.530 [2024-11-29 18:46:34.364227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.530 [2024-11-29 18:46:34.364236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:14.530 [2024-11-29 18:46:34.364244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.212 ms 00:33:14.530 [2024-11-29 18:46:34.364252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.530 [2024-11-29 18:46:34.366588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.530 [2024-11-29 18:46:34.366629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:14.530 [2024-11-29 18:46:34.366645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.319 ms 00:33:14.530 [2024-11-29 18:46:34.366657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.530 [2024-11-29 18:46:34.366775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.530 [2024-11-29 18:46:34.366783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:14.530 [2024-11-29 18:46:34.366792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:33:14.530 [2024-11-29 
18:46:34.366799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.530 [2024-11-29 18:46:34.374693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:14.530 [2024-11-29 18:46:34.374848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:14.530 [2024-11-29 18:46:34.374905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:14.530 [2024-11-29 18:46:34.374928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.530 [2024-11-29 18:46:34.375013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:14.530 [2024-11-29 18:46:34.375035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:14.530 [2024-11-29 18:46:34.375063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:14.530 [2024-11-29 18:46:34.375090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.530 [2024-11-29 18:46:34.375170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:14.530 [2024-11-29 18:46:34.375268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:14.530 [2024-11-29 18:46:34.375294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:14.530 [2024-11-29 18:46:34.375314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.530 [2024-11-29 18:46:34.375345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:14.530 [2024-11-29 18:46:34.375366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:14.530 [2024-11-29 18:46:34.375386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:14.530 [2024-11-29 18:46:34.375405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.530 [2024-11-29 18:46:34.391029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:14.530 [2024-11-29 18:46:34.391213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:14.530 [2024-11-29 18:46:34.391269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:14.530 [2024-11-29 18:46:34.391292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.530 [2024-11-29 18:46:34.403019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:14.530 [2024-11-29 18:46:34.403198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:14.530 [2024-11-29 18:46:34.403266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:14.530 [2024-11-29 18:46:34.403289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.530 [2024-11-29 18:46:34.403362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:14.530 [2024-11-29 18:46:34.403392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:14.530 [2024-11-29 18:46:34.403417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:14.530 [2024-11-29 18:46:34.403436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.530 [2024-11-29 18:46:34.403512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:14.530 [2024-11-29 18:46:34.403536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:14.530 [2024-11-29 18:46:34.403559] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:14.530 [2024-11-29 18:46:34.403629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.530 [2024-11-29 18:46:34.403708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:14.530 [2024-11-29 18:46:34.403733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:14.530 [2024-11-29 18:46:34.403753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:14.530 [2024-11-29 18:46:34.403776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.530 [2024-11-29 18:46:34.403814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:14.530 [2024-11-29 18:46:34.403999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:14.530 [2024-11-29 18:46:34.404041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:14.530 [2024-11-29 18:46:34.404060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.530 [2024-11-29 18:46:34.404124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:14.530 [2024-11-29 18:46:34.404146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:14.530 [2024-11-29 18:46:34.404166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:14.530 [2024-11-29 18:46:34.404239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.530 [2024-11-29 18:46:34.404301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:14.530 [2024-11-29 18:46:34.404325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:14.530 [2024-11-29 18:46:34.404345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:14.530 [2024-11-29 18:46:34.404402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.530 [2024-11-29 18:46:34.404594] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 43.029 ms, result 0 00:33:14.792 00:33:14.792 00:33:14.792 18:46:34 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:17.340 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:17.340 18:46:36 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:17.340 18:46:36 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:33:17.340 18:46:36 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:17.340 18:46:36 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:17.340 18:46:36 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:17.340 Process with pid 94498 is not found 00:33:17.340 Remove shared memory files 00:33:17.340 18:46:36 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 94498 00:33:17.340 18:46:36 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94498 ']' 00:33:17.340 18:46:36 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94498 00:33:17.340 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (94498) - No such process 00:33:17.340 18:46:36 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 94498 is not found' 00:33:17.340 
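The 'FTL fast shutdown' hands control back to restore.sh, which then follows the standard autotest teardown pattern shown in the xtrace above: verify the restored file against a saved checksum, remove the scratch files, and kill the target process while tolerating the case where it already exited (hence the 'No such process' line). A minimal sketch of that pattern, with the paths and PID copied from the log; the real logic lives in test/ftl/restore.sh and test/common/autotest_common.sh:

    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile   # path as printed above
    svcpid=94498                                              # PID of the app under test

    md5sum -c "${testfile}.md5"        # non-zero exit here fails the whole test
    rm -f "$testfile" "${testfile}.md5" "${testfile%/*}/config/ftl.json"

    # kill -0 delivers no signal; it only probes whether the PID still exists.
    if kill -0 "$svcpid" 2>/dev/null; then
        kill "$svcpid" && wait "$svcpid"   # wait works because the app is a child of this shell
    else
        echo "Process with pid $svcpid is not found"
    fi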
18:46:36 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:33:17.340 18:46:36 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:17.340 18:46:36 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:33:17.340 18:46:36 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_941bac55-3541-4ffc-a49c-047f115f854f_band_md /dev/hugepages/ftl_941bac55-3541-4ffc-a49c-047f115f854f_l2p_l1 /dev/hugepages/ftl_941bac55-3541-4ffc-a49c-047f115f854f_l2p_l2 /dev/hugepages/ftl_941bac55-3541-4ffc-a49c-047f115f854f_l2p_l2_ctx /dev/hugepages/ftl_941bac55-3541-4ffc-a49c-047f115f854f_nvc_md /dev/hugepages/ftl_941bac55-3541-4ffc-a49c-047f115f854f_p2l_pool /dev/hugepages/ftl_941bac55-3541-4ffc-a49c-047f115f854f_sb /dev/hugepages/ftl_941bac55-3541-4ffc-a49c-047f115f854f_sb_shm /dev/hugepages/ftl_941bac55-3541-4ffc-a49c-047f115f854f_trim_bitmap /dev/hugepages/ftl_941bac55-3541-4ffc-a49c-047f115f854f_trim_log /dev/hugepages/ftl_941bac55-3541-4ffc-a49c-047f115f854f_trim_md /dev/hugepages/ftl_941bac55-3541-4ffc-a49c-047f115f854f_vmap 00:33:17.340 18:46:36 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:33:17.340 18:46:36 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:17.340 18:46:36 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:33:17.340 ************************************ 00:33:17.340 END TEST ftl_restore_fast 00:33:17.340 ************************************ 00:33:17.340 00:33:17.340 real 4m19.311s 00:33:17.340 user 4m6.642s 00:33:17.340 sys 0m12.121s 00:33:17.340 18:46:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:17.340 18:46:36 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:33:17.340 Process with pid 86374 is not found 00:33:17.340 18:46:36 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:17.340 18:46:36 ftl -- ftl/ftl.sh@14 -- # killprocess 86374 00:33:17.340 18:46:36 ftl -- common/autotest_common.sh@954 -- # '[' -z 86374 ']' 00:33:17.340 18:46:36 ftl -- common/autotest_common.sh@958 -- # kill -0 86374 00:33:17.340 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (86374) - No such process 00:33:17.340 18:46:36 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 86374 is not found' 00:33:17.340 18:46:36 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:17.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.340 18:46:36 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=97149 00:33:17.340 18:46:36 ftl -- ftl/ftl.sh@20 -- # waitforlisten 97149 00:33:17.340 18:46:36 ftl -- common/autotest_common.sh@835 -- # '[' -z 97149 ']' 00:33:17.340 18:46:36 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.340 18:46:36 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:17.340 18:46:36 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.340 18:46:36 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:17.340 18:46:36 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:17.340 18:46:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:17.340 [2024-11-29 18:46:36.909011] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
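The long rm -f above is remove_shm doing the FTL-specific part of the cleanup: each metadata region (superblock, both L2P levels, P2L pool, band and NV-cache MD, trim map and log, valid-map bitmap) appears to be kept as a named file under /dev/hugepages, keyed by the device UUID, which is what let the earlier startup log 'SHM: skipping p2l ckpt restore' and finish 'FTL startup' in 83.462 ms. A hedged sketch of the same cleanup using a glob rather than the spelled-out file list:

    uuid=941bac55-3541-4ffc-a49c-047f115f854f   # device UUID, as dumped in the stats above
    shopt -s nullglob                           # an already-clean system becomes a no-op
    for f in /dev/hugepages/ftl_"${uuid}"_*; do
        rm -f "$f"   # _band_md, _l2p_l1, _l2p_l2, _nvc_md, _p2l_pool, _sb, _sb_shm, _trim_*, _vmap
    done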
00:33:17.340 [2024-11-29 18:46:36.909325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97149 ] 00:33:17.340 [2024-11-29 18:46:37.061608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.340 [2024-11-29 18:46:37.081096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.914 18:46:37 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:17.914 18:46:37 ftl -- common/autotest_common.sh@868 -- # return 0 00:33:17.914 18:46:37 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:18.176 nvme0n1 00:33:18.176 18:46:38 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:18.176 18:46:38 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:18.176 18:46:38 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:18.435 18:46:38 ftl -- ftl/common.sh@28 -- # stores=0efa006f-1dd1-4956-8927-aa8a0d7b8927 00:33:18.435 18:46:38 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:18.435 18:46:38 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0efa006f-1dd1-4956-8927-aa8a0d7b8927 00:33:18.695 18:46:38 ftl -- ftl/ftl.sh@23 -- # killprocess 97149 00:33:18.695 18:46:38 ftl -- common/autotest_common.sh@954 -- # '[' -z 97149 ']' 00:33:18.695 18:46:38 ftl -- common/autotest_common.sh@958 -- # kill -0 97149 00:33:18.695 18:46:38 ftl -- common/autotest_common.sh@959 -- # uname 00:33:18.695 18:46:38 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:18.695 18:46:38 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97149 00:33:18.695 killing process with pid 97149 00:33:18.695 18:46:38 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:18.695 18:46:38 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:18.695 18:46:38 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97149' 00:33:18.695 18:46:38 ftl -- common/autotest_common.sh@973 -- # kill 97149 00:33:18.695 18:46:38 ftl -- common/autotest_common.sh@978 -- # wait 97149 00:33:18.954 18:46:38 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:19.216 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:19.216 Waiting for block devices as requested 00:33:19.216 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:19.477 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:19.477 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:19.477 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:24.772 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:24.772 18:46:44 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:33:24.772 18:46:44 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:24.772 Remove shared memory files 00:33:24.772 18:46:44 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:33:24.772 18:46:44 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:33:24.772 18:46:44 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:33:24.772 18:46:44 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:24.772 18:46:44 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:33:24.772 
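The xtrace above shows ftl.sh's exit hook in full: attach the NVMe controller over RPC, enumerate any leftover lvolstores with bdev_lvol_get_lvstores, extract their UUIDs with jq, and delete each one so the base device is handed back clean. A condensed sketch of that clear_lvols loop, run from the SPDK repo root as in the log:

    clear_lvols() {
        local stores lvs
        stores=$(scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
        for lvs in $stores; do
            scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
        done
    }
    clear_lvols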
************************************ 00:33:24.772 END TEST ftl 00:33:24.772 ************************************ 00:33:24.772 00:33:24.772 real 16m9.187s 00:33:24.772 user 18m6.931s 00:33:24.772 sys 1m21.372s 00:33:24.772 18:46:44 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:24.772 18:46:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:24.772 18:46:44 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:24.772 18:46:44 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:24.772 18:46:44 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:24.772 18:46:44 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:24.772 18:46:44 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:24.772 18:46:44 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:24.772 18:46:44 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:24.772 18:46:44 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:33:24.772 18:46:44 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:33:24.772 18:46:44 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:33:24.772 18:46:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:24.772 18:46:44 -- common/autotest_common.sh@10 -- # set +x 00:33:24.772 18:46:44 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:33:24.772 18:46:44 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:33:24.772 18:46:44 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:33:24.772 18:46:44 -- common/autotest_common.sh@10 -- # set +x 00:33:26.168 INFO: APP EXITING 00:33:26.168 INFO: killing all VMs 00:33:26.168 INFO: killing vhost app 00:33:26.168 INFO: EXIT DONE 00:33:26.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:27.004 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:27.004 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:27.004 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:27.004 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:27.266 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:27.841 Cleaning 00:33:27.841 Removing: /var/run/dpdk/spdk0/config 00:33:27.841 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:27.841 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:27.841 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:27.841 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:27.841 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:27.841 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:27.841 Removing: /var/run/dpdk/spdk0 00:33:27.841 Removing: /var/run/dpdk/spdk_pid69324 00:33:27.841 Removing: /var/run/dpdk/spdk_pid69482 00:33:27.841 Removing: /var/run/dpdk/spdk_pid69678 00:33:27.841 Removing: /var/run/dpdk/spdk_pid69760 00:33:27.841 Removing: /var/run/dpdk/spdk_pid69789 00:33:27.841 Removing: /var/run/dpdk/spdk_pid69900 00:33:27.841 Removing: /var/run/dpdk/spdk_pid69918 00:33:27.841 Removing: /var/run/dpdk/spdk_pid70095 00:33:27.841 Removing: /var/run/dpdk/spdk_pid70169 00:33:27.841 Removing: /var/run/dpdk/spdk_pid70248 00:33:27.841 Removing: /var/run/dpdk/spdk_pid70343 00:33:27.841 Removing: /var/run/dpdk/spdk_pid70423 00:33:27.841 Removing: /var/run/dpdk/spdk_pid70463 00:33:27.841 Removing: /var/run/dpdk/spdk_pid70494 00:33:27.841 Removing: /var/run/dpdk/spdk_pid70564 00:33:27.841 Removing: /var/run/dpdk/spdk_pid70637 00:33:27.841 Removing: /var/run/dpdk/spdk_pid71062 00:33:27.841 Removing: /var/run/dpdk/spdk_pid71110 
00:33:27.841 Removing: /var/run/dpdk/spdk_pid71156 00:33:27.841 Removing: /var/run/dpdk/spdk_pid71167 00:33:27.841 Removing: /var/run/dpdk/spdk_pid71230 00:33:27.841 Removing: /var/run/dpdk/spdk_pid71246 00:33:27.841 Removing: /var/run/dpdk/spdk_pid71304 00:33:27.841 Removing: /var/run/dpdk/spdk_pid71315 00:33:27.841 Removing: /var/run/dpdk/spdk_pid71362 00:33:27.841 Removing: /var/run/dpdk/spdk_pid71380 00:33:27.841 Removing: /var/run/dpdk/spdk_pid71422 00:33:27.841 Removing: /var/run/dpdk/spdk_pid71435 00:33:27.841 Removing: /var/run/dpdk/spdk_pid71573 00:33:27.841 Removing: /var/run/dpdk/spdk_pid71604 00:33:27.841 Removing: /var/run/dpdk/spdk_pid71682 00:33:27.841 Removing: /var/run/dpdk/spdk_pid71854 00:33:27.841 Removing: /var/run/dpdk/spdk_pid71916 00:33:27.841 Removing: /var/run/dpdk/spdk_pid71947 00:33:27.841 Removing: /var/run/dpdk/spdk_pid72379 00:33:27.841 Removing: /var/run/dpdk/spdk_pid72474 00:33:27.841 Removing: /var/run/dpdk/spdk_pid72577 00:33:27.841 Removing: /var/run/dpdk/spdk_pid72618 00:33:27.841 Removing: /var/run/dpdk/spdk_pid72641 00:33:27.841 Removing: /var/run/dpdk/spdk_pid72714 00:33:27.841 Removing: /var/run/dpdk/spdk_pid73331 00:33:27.841 Removing: /var/run/dpdk/spdk_pid73357 00:33:27.841 Removing: /var/run/dpdk/spdk_pid73818 00:33:27.841 Removing: /var/run/dpdk/spdk_pid73910 00:33:27.841 Removing: /var/run/dpdk/spdk_pid74014 00:33:27.841 Removing: /var/run/dpdk/spdk_pid74055 00:33:27.841 Removing: /var/run/dpdk/spdk_pid74076 00:33:27.841 Removing: /var/run/dpdk/spdk_pid74096 00:33:27.841 Removing: /var/run/dpdk/spdk_pid75921 00:33:27.841 Removing: /var/run/dpdk/spdk_pid76036 00:33:27.841 Removing: /var/run/dpdk/spdk_pid76045 00:33:27.841 Removing: /var/run/dpdk/spdk_pid76063 00:33:27.841 Removing: /var/run/dpdk/spdk_pid76103 00:33:27.841 Removing: /var/run/dpdk/spdk_pid76107 00:33:27.841 Removing: /var/run/dpdk/spdk_pid76119 00:33:27.841 Removing: /var/run/dpdk/spdk_pid76164 00:33:27.841 Removing: /var/run/dpdk/spdk_pid76168 00:33:27.841 Removing: /var/run/dpdk/spdk_pid76180 00:33:27.841 Removing: /var/run/dpdk/spdk_pid76225 00:33:27.841 Removing: /var/run/dpdk/spdk_pid76229 00:33:27.841 Removing: /var/run/dpdk/spdk_pid76241 00:33:27.841 Removing: /var/run/dpdk/spdk_pid77628 00:33:27.841 Removing: /var/run/dpdk/spdk_pid77714 00:33:27.841 Removing: /var/run/dpdk/spdk_pid79112 00:33:27.841 Removing: /var/run/dpdk/spdk_pid80836 00:33:27.841 Removing: /var/run/dpdk/spdk_pid80894 00:33:27.841 Removing: /var/run/dpdk/spdk_pid80959 00:33:27.841 Removing: /var/run/dpdk/spdk_pid81059 00:33:28.102 Removing: /var/run/dpdk/spdk_pid81145 00:33:28.102 Removing: /var/run/dpdk/spdk_pid81230 00:33:28.102 Removing: /var/run/dpdk/spdk_pid81287 00:33:28.102 Removing: /var/run/dpdk/spdk_pid81351 00:33:28.102 Removing: /var/run/dpdk/spdk_pid81450 00:33:28.102 Removing: /var/run/dpdk/spdk_pid81541 00:33:28.102 Removing: /var/run/dpdk/spdk_pid81631 00:33:28.102 Removing: /var/run/dpdk/spdk_pid81683 00:33:28.102 Removing: /var/run/dpdk/spdk_pid81753 00:33:28.102 Removing: /var/run/dpdk/spdk_pid81846 00:33:28.102 Removing: /var/run/dpdk/spdk_pid81932 00:33:28.102 Removing: /var/run/dpdk/spdk_pid82017 00:33:28.102 Removing: /var/run/dpdk/spdk_pid82080 00:33:28.102 Removing: /var/run/dpdk/spdk_pid82144 00:33:28.102 Removing: /var/run/dpdk/spdk_pid82243 00:33:28.102 Removing: /var/run/dpdk/spdk_pid82322 00:33:28.102 Removing: /var/run/dpdk/spdk_pid82408 00:33:28.102 Removing: /var/run/dpdk/spdk_pid82471 00:33:28.102 Removing: /var/run/dpdk/spdk_pid82534 00:33:28.102 Removing: 
/var/run/dpdk/spdk_pid82603 00:33:28.102 Removing: /var/run/dpdk/spdk_pid82666 00:33:28.102 Removing: /var/run/dpdk/spdk_pid82764 00:33:28.102 Removing: /var/run/dpdk/spdk_pid82844 00:33:28.102 Removing: /var/run/dpdk/spdk_pid82927 00:33:28.102 Removing: /var/run/dpdk/spdk_pid82985 00:33:28.102 Removing: /var/run/dpdk/spdk_pid83048 00:33:28.102 Removing: /var/run/dpdk/spdk_pid83117 00:33:28.102 Removing: /var/run/dpdk/spdk_pid83180 00:33:28.102 Removing: /var/run/dpdk/spdk_pid83284 00:33:28.102 Removing: /var/run/dpdk/spdk_pid83364 00:33:28.102 Removing: /var/run/dpdk/spdk_pid83502 00:33:28.102 Removing: /var/run/dpdk/spdk_pid83770 00:33:28.102 Removing: /var/run/dpdk/spdk_pid83800 00:33:28.102 Removing: /var/run/dpdk/spdk_pid84245 00:33:28.102 Removing: /var/run/dpdk/spdk_pid84435 00:33:28.102 Removing: /var/run/dpdk/spdk_pid84521 00:33:28.102 Removing: /var/run/dpdk/spdk_pid84630 00:33:28.102 Removing: /var/run/dpdk/spdk_pid84662 00:33:28.102 Removing: /var/run/dpdk/spdk_pid84689 00:33:28.102 Removing: /var/run/dpdk/spdk_pid84974 00:33:28.102 Removing: /var/run/dpdk/spdk_pid85013 00:33:28.102 Removing: /var/run/dpdk/spdk_pid85068 00:33:28.102 Removing: /var/run/dpdk/spdk_pid85431 00:33:28.102 Removing: /var/run/dpdk/spdk_pid85578 00:33:28.102 Removing: /var/run/dpdk/spdk_pid86374 00:33:28.102 Removing: /var/run/dpdk/spdk_pid86491 00:33:28.102 Removing: /var/run/dpdk/spdk_pid86644 00:33:28.102 Removing: /var/run/dpdk/spdk_pid86726 00:33:28.102 Removing: /var/run/dpdk/spdk_pid87017 00:33:28.102 Removing: /var/run/dpdk/spdk_pid87282 00:33:28.102 Removing: /var/run/dpdk/spdk_pid87630 00:33:28.102 Removing: /var/run/dpdk/spdk_pid87784 00:33:28.102 Removing: /var/run/dpdk/spdk_pid87917 00:33:28.102 Removing: /var/run/dpdk/spdk_pid87953 00:33:28.102 Removing: /var/run/dpdk/spdk_pid88113 00:33:28.102 Removing: /var/run/dpdk/spdk_pid88132 00:33:28.102 Removing: /var/run/dpdk/spdk_pid88174 00:33:28.102 Removing: /var/run/dpdk/spdk_pid88405 00:33:28.102 Removing: /var/run/dpdk/spdk_pid88617 00:33:28.102 Removing: /var/run/dpdk/spdk_pid89146 00:33:28.102 Removing: /var/run/dpdk/spdk_pid89770 00:33:28.102 Removing: /var/run/dpdk/spdk_pid90264 00:33:28.102 Removing: /var/run/dpdk/spdk_pid91046 00:33:28.102 Removing: /var/run/dpdk/spdk_pid91188 00:33:28.102 Removing: /var/run/dpdk/spdk_pid91265 00:33:28.102 Removing: /var/run/dpdk/spdk_pid91728 00:33:28.102 Removing: /var/run/dpdk/spdk_pid91775 00:33:28.102 Removing: /var/run/dpdk/spdk_pid92407 00:33:28.102 Removing: /var/run/dpdk/spdk_pid92806 00:33:28.102 Removing: /var/run/dpdk/spdk_pid93532 00:33:28.102 Removing: /var/run/dpdk/spdk_pid93656 00:33:28.102 Removing: /var/run/dpdk/spdk_pid93691 00:33:28.102 Removing: /var/run/dpdk/spdk_pid93752 00:33:28.102 Removing: /var/run/dpdk/spdk_pid93807 00:33:28.102 Removing: /var/run/dpdk/spdk_pid93854 00:33:28.102 Removing: /var/run/dpdk/spdk_pid94027 00:33:28.102 Removing: /var/run/dpdk/spdk_pid94096 00:33:28.102 Removing: /var/run/dpdk/spdk_pid94164 00:33:28.102 Removing: /var/run/dpdk/spdk_pid94214 00:33:28.102 Removing: /var/run/dpdk/spdk_pid94249 00:33:28.102 Removing: /var/run/dpdk/spdk_pid94365 00:33:28.102 Removing: /var/run/dpdk/spdk_pid94498 00:33:28.102 Removing: /var/run/dpdk/spdk_pid94709 00:33:28.102 Removing: /var/run/dpdk/spdk_pid95132 00:33:28.102 Removing: /var/run/dpdk/spdk_pid95814 00:33:28.102 Removing: /var/run/dpdk/spdk_pid96489 00:33:28.102 Removing: /var/run/dpdk/spdk_pid97149 00:33:28.102 Clean 00:33:28.364 18:46:48 -- common/autotest_common.sh@1453 -- # return 0 00:33:28.364 
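The 'Cleaning / Removing:' block above is DPDK runtime-state cleanup rather than anything SPDK-specific: the primary process leaves its config, fbarray memory-segment maps, and hugepage_info under /var/run/dpdk/spdk0/, and every run launched with --file-prefix=spdk_pidNNNN (the flag is visible in the EAL parameters earlier) leaves a runtime directory of its own. A rough sketch of the equivalent removal, assuming root privileges and that none of those processes are still alive:

    rm -f /var/run/dpdk/spdk0/config /var/run/dpdk/spdk0/fbarray_* /var/run/dpdk/spdk0/hugepage_info
    rmdir /var/run/dpdk/spdk0 2>/dev/null || true   # only removed once emptied, as in the log
    shopt -s nullglob
    for d in /var/run/dpdk/spdk_pid*; do            # stale per-PID runtime dirs from earlier runs
        rm -rf "$d"
    done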
18:46:48 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:33:28.364 18:46:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:28.364 18:46:48 -- common/autotest_common.sh@10 -- # set +x 00:33:28.364 18:46:48 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:33:28.364 18:46:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:28.364 18:46:48 -- common/autotest_common.sh@10 -- # set +x 00:33:28.364 18:46:48 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:28.364 18:46:48 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:28.364 18:46:48 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:28.364 18:46:48 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:33:28.364 18:46:48 -- spdk/autotest.sh@398 -- # hostname 00:33:28.364 18:46:48 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:28.625 geninfo: WARNING: invalid characters removed from testname! 00:33:55.218 18:47:13 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:57.130 18:47:16 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:59.680 18:47:19 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:02.228 18:47:21 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:03.613 18:47:23 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:06.914 18:47:26 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:08.297 18:47:28 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:08.557 18:47:28 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:08.557 18:47:28 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:34:08.557 18:47:28 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:08.557 18:47:28 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:08.557 18:47:28 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:08.557 + [[ -n 5770 ]] 00:34:08.557 + sudo kill 5770 00:34:08.567 [Pipeline] } 00:34:08.581 [Pipeline] // timeout 00:34:08.586 [Pipeline] } 00:34:08.599 [Pipeline] // stage 00:34:08.604 [Pipeline] } 00:34:08.617 [Pipeline] // catchError 00:34:08.625 [Pipeline] stage 00:34:08.627 [Pipeline] { (Stop VM) 00:34:08.639 [Pipeline] sh 00:34:08.976 + vagrant halt 00:34:11.523 ==> default: Halting domain... 00:34:16.845 [Pipeline] sh 00:34:17.155 + vagrant destroy -f 00:34:19.695 ==> default: Removing domain... 00:34:20.285 [Pipeline] sh 00:34:20.570 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:34:20.578 [Pipeline] } 00:34:20.591 [Pipeline] // stage 00:34:20.595 [Pipeline] } 00:34:20.606 [Pipeline] // dir 00:34:20.611 [Pipeline] } 00:34:20.622 [Pipeline] // wrap 00:34:20.627 [Pipeline] } 00:34:20.639 [Pipeline] // catchError 00:34:20.647 [Pipeline] stage 00:34:20.649 [Pipeline] { (Epilogue) 00:34:20.660 [Pipeline] sh 00:34:20.940 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:26.229 [Pipeline] catchError 00:34:26.231 [Pipeline] { 00:34:26.244 [Pipeline] sh 00:34:26.528 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:26.528 Artifacts sizes are good 00:34:26.537 [Pipeline] } 00:34:26.549 [Pipeline] // catchError 00:34:26.558 [Pipeline] archiveArtifacts 00:34:26.565 Archiving artifacts 00:34:26.675 [Pipeline] cleanWs 00:34:26.688 [WS-CLEANUP] Deleting project workspace... 00:34:26.688 [WS-CLEANUP] Deferred wipeout is used... 00:34:26.695 [WS-CLEANUP] done 00:34:26.697 [Pipeline] } 00:34:26.711 [Pipeline] // stage 00:34:26.716 [Pipeline] } 00:34:26.725 [Pipeline] // node 00:34:26.729 [Pipeline] End of Pipeline 00:34:26.762 Finished: SUCCESS
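For reference, the coverage post-processing that ran just before the pipeline teardown reduces to a merge-then-filter sequence: combine the baseline and test captures into one tracefile, then strip every path that is not SPDK source. A condensed sketch, with the verbose --rc options and absolute paths of the invocations above omitted for readability:

    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r cov_total.info "$pat" -o cov_total.info   # drop matching source paths
    done
    rm -f cov_base.info cov_test.info                        # keep only the filtered total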